An econometrician on the SEIRD epidemiological model for Covid-19

There is a new paper by Ivan Korolev:

This paper studies the SEIRD epidemic model for COVID-19. First, I show that the model is poorly identified from the observed number of deaths and confirmed cases. There are many sets of parameters that are observationally equivalent in the short run but lead to markedly different long run forecasts. Second, I demonstrate using the data from Iceland that auxiliary information from random tests can be used to calibrate the initial parameters of the model and reduce the range of possible forecasts about the future number of deaths. Finally, I show that the basic reproduction number R0 can be identified from the data, conditional on the clinical parameters. I then estimate it for the US and several other countries, allowing for possible underreporting of the number of cases. The resulting estimates of R0 are heterogeneous across countries: they are 2-3 times higher for Western countries than for Asian countries. I demonstrate that if one fails to take underreporting into account and estimates R0 from the cases data, the resulting estimate of R0 will be biased downward and the model will fail to fit the observed data.
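To see the identification problem in miniature, here is a rough sketch of a discrete-time SEIRD simulation (this is not Korolev's code; the parameter values are illustrative assumptions, not estimates). The two scenarios share the same transmission dynamics, but one assumes ten times as many unobserved initial infections and a fatality rate one tenth as large; their early death counts are nearly identical while their long-run totals differ by an order of magnitude.

    # Illustrative SEIRD sketch -- not Korolev's model or calibration.
    def seird(beta, sigma, gamma, ifr, e0, i0, n=330e6, days=300):
        """Daily-step SEIRD; returns the cumulative-death series."""
        s, e, i, r, d = n - e0 - i0, float(e0), float(i0), 0.0, 0.0
        deaths = []
        for _ in range(days):
            new_exposed = beta * s * i / n       # S -> E
            new_infectious = sigma * e           # E -> I
            resolved = gamma * i                 # I -> R or D
            s -= new_exposed
            e += new_exposed - new_infectious
            i += new_infectious - resolved
            r += (1 - ifr) * resolved
            d += ifr * resolved
            deaths.append(d)
        return deaths

    # Same R0 = beta/gamma = 2.5; scenario B assumes 10x more unobserved
    # initial infections and a fatality rate one tenth as large.
    a = seird(beta=0.5, sigma=0.2, gamma=0.2, ifr=0.010, e0=1_000, i0=1_000)
    b = seird(beta=0.5, sigma=0.2, gamma=0.2, ifr=0.001, e0=10_000, i0=10_000)

    print("cumulative deaths, day 30: ", round(a[29]), "vs", round(b[29]))
    print("cumulative deaths, day 300:", round(a[-1]), "vs", round(b[-1]))

Seen only through the first month or so of deaths, the two worlds are essentially indistinguishable, which is why the paper argues that auxiliary data such as random testing is needed to pin the parameters down.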

Here is the full paper.  And here is Ivan’s brief supplemental note on CFR.  (By the way, here is a new and related Anthony Atkeson paper on estimating the fatality rate.)

And here is a further paper on the IHME model, by statisticians from CTDS, Northwestern University and the University of Texas, excerpt from the opener:

  • In excess of 70% of US states had actual death rates falling outside the 95% prediction interval for that state (see Figure 1)
  • The ability of the model to make accurate predictions decreases with increasing amount of data (see Figure 2)

Again, I am very happy to present counter evidence to these arguments.  I readily admit this is outside my area of expertise, but I have read through the paper and it is not much more than a few pages of recording numbers and comparing them to the actual outcomes (you will note the model predicts New York fairly well, and thus the predictions are of a “train wreck” nature).

Let me just repeat the two central findings again:

  • In excess of 70% of US states had actual death rates falling outside the 95% prediction interval for that state (see Figure 1)
  • The ability of the model to make accurate predictions decreases with increasing amount of data (see Figure 2)
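To make the first of those findings concrete, here is a minimal sketch, with invented numbers rather than the paper's state-level data, of the coverage check involved: a well-calibrated 95% prediction interval should contain the observed value roughly 95% of the time, so missing in more than 70% of states is a striking failure.

    # Hypothetical coverage check for 95% prediction intervals (toy numbers only).
    # Each entry: (state, lower 2.5% bound, upper 97.5% bound, observed deaths)
    forecasts = [
        ("A",  40, 120,  95),   # observed inside the interval
        ("B",  10,  60,  75),   # observed above the upper bound
        ("C", 200, 400, 150),   # observed below the lower bound
        ("D",   5,  30,  12),   # observed inside the interval
    ]

    misses = [s for s, lo, hi, obs in forecasts if not (lo <= obs <= hi)]
    print(f"{len(misses) / len(forecasts):.0%} of states outside their interval: {misses}")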

So now really is the time to be asking tough questions about epidemiology, and yes, epidemiologists.  I would very gladly publish and “signal boost” the best positive response possible.

And just to be clear (again), I fully support current lockdown efforts (best choice until we have more data and also a better theory), I don’t want Fauci to be fired, and I don’t think economists are necessarily better forecasters.  I do feel I am not getting straight answers.

Comments

I recall having seen "deathist" sentiment in this blog. The following should effectively demolish it:

https://www.aging-us.com/article/103001/text

I think you are doing the right thing, Tyler. Our decisions are based upon models (and hysteria) and if the models are entirely broken then we have an enormous problem.

But this is a huge risk to your career

>But this is a huge risk to your career

Pffft. Tyler is a reliable Dem voter, every time. He's completely safe.

TC is tenured, so I doubt any risk.

Bonus trivia: https://www.washingtonian.com/2020/04/10/naval-academy-professor-bruce-fleming-sends-shirtless-pics-offends-women-minorities/ - a rather narcissistic English professor at the Naval Academy that the Navy spent years trying to fire; students love him despite or because of his outrageous antics. I had some professors (or at least one) like this before the days of PC.

TC is far more than tenured.

He has a tremendous amount of reach with his various efforts (e.g., Conversations with Tyler). He is a sought-after guest speaker. This can all dry up with just one false step off the beaten path.

Bingo. The left doesn't hesitate to eat its own. (See "Me too" for many examples.)

...I should add, "ordinarily". Biden is exempt, Tyler Cowen probably not so much.

It is great to hold models accountable. I don't remember seeing this blog do that for economic models. Since this is an economics blog, maybe it can start doing that too.

That is exactly what the academic economic literature does with huge back and forth on most of the mainstream macro and other large models. The kind of questioning done by this econometrician is routine and ongoing. Then people complain that economists don't give clearcut advice.

I doubt you have read the blog for any amount of time. This blog has discussed the shortfalls of economic models of all stripes for more than a decade. Also kudos for the "what-about-ism" to deflect having to engage with criticism.

Ok, below, link your best example of a prior MR post that openly criticized an economic model that came from a prominent economist with an ideological affiliation similar to MR. Ideally it should also include a sentence like "I do feel I am not getting straight answers."

Prior_approval, this is such an inane request, he does this all the time and has for over a decade.

By the way, notice how fast the troll jumped from “criticize Econ models” to

criticized an economic model that came from a prominent economist with an ideological affiliation similar to MR

But it also needs a key phrase! "Ideally it should also include a sentence like 'I do feel I am not getting straight answers.'"

Lol. How about someone literally from Mercatus?

https://marginalrevolution.com/marginalrevolution/2016/02/how-tight-is-monetary-policy-now.html

You do realize this example is just philosophy right? Holding an economic model accountable means comparing the quantitative prediction of an econometric model to real-world data. Maybe the reason I know this is because my GRE score is higher than yours.

It's really impressive to see someone who can move a goalpost that quickly while also pedaling backwards.

TC on epidemiologists:
"d. What is their overall track record on predictions, whether before or during this crisis?
g. How well do they understand how to model uncertainty of forecasts, relative to say what a top econometrician would know?"
---
There is lots of past discussion of econometric models on MR in a very general style that doesn't offend anyone in particular. When has TC specifically called out a fellow economist's econometric model for having identification problems or missing the target in a forecast with respect to real world data? I have not been able to find it.

See the discussion on Saez and Zucman on the blog last fall. Both are highly prominent economists. It's clear you are not an economist nor a general reader of this blog. Economists are quite vicious with each other at conferences and in the seminar room. You don't know what you are talking about.

The bottom line is that economists aren't good at quantitatively predicting the future. If Tyler looked around at economist's attempts to predict the future, he would find they had failed miserably and make himself a lot of enemies. In fact, he is already aware of this problem, but he doesn't talk about it openly unless the failure is committed by someone with opposite ideology to his. Yet now he wants epidemiological models to successfully predict the future. This is somewhat absurd. The truth is that predicting the future is difficult regardless of your field, and most economists have already given up (or never tried in the first place). If the epidemiological models can guide us to the right course of action, they have done their job.

What utter nonsense. If epidemiological models fail at prediction they (like climate change models) can't possibly "guide us to the right course of action" except randomly.

Now if only he'd apply the same criticisms to Global Warming models, which have been completely wrong 100% of the time.

Is it time to ask questions about climate "scientists"?

No! Of course not!!

Hush! There's lots of career advancement in Global Warmmongering, and none at all on blowing the whistle on it.

Their models have problems too. One significant one is how to model feedbacks such as clouds, which influence albedo and IR escape into space. Often they assume a positive feedback, but it can just as easily be negative and this has an impact on the answer. It's a poorly understood topic.

@Cat - you're sounding more and more like a Baptist teaching evolution. Think of it like this: if albedo is giving negative feedback and global warming is accelerating, then solve for the scary Precautionary Principle equilibrium... especially as more and more white surfaces are eliminated (soot on the poles, asphalt jungle, urbanization).

@Ray, clouds have both effects, not just one: albedo, and reflection of outgoing earth IR back to earth. It depends on the type of cloud. Whether the net feedback is positive or negative is an open question.
Feedbacks are important; for example, El Niño creates a positive feedback for water vapor but a negative one for cloud albedo.
There's also a theory that variation in cloud cover depends on the cosmic ray flux (H+, alpha particles) from space, which is a function of shielding from high solar activity.
See Svensmark:
http://faculty.fgcu.edu/twimberley/EnviroPol/EnviroPhilo/MissingLink.pdf
Also the astrophysicist Nir Shaviv.
https://www.youtube.com/watch?v=p9gjU1T4XL4

Name one "climate model" that hasn't been wrong on the down side, or "wrong because action was taken to prevent it from occurring", which is what models are supposed to do.

The point of the pandemic modelers is spurring action, not to convince people to do nothing to see how accurate their death predictions are.

The problem for mulp's point with respect to the IHME model is that it updates to include the impact of government-mandated social distancing measures. Go look at it - for each state it shows a date for educational facilities closing, a stay-at-home order, etc.

Supporting lockdowns based on poor models is pretty easy to do when you're a tenured academic and the models were produced by other tenured academics. That is to say, when neither you nor they are particularly put out by lockdowns, other than having to move your classes online.

Sounds like something a covid-panic-denialist would say ;)

"I do feel I am not getting straight answers."

I agree with all, except the last sentence. I don't think there are any straight answers. Covid-19 is a novel virus, and we know very little about it. So far, all is guessing. Even in economics, where there are so many unknowns, there are years of data to glean information from. We just have to accept the frustration of low information.

Also, remember the warnings that if the lock down works, it'll look like it wasn't necessary. Maybe it was, and maybe not. Just take the win that we aren't all dying. Study the data and prepare for next time.

It's a novel virus, but viruses and pandemics are not novel. SARS-CoV was a similar virus. We should have some methodology and applicability of models nailed down by now. The SIR model is close to a century old, I think.
It's a bit telling that an amateur (he is an econometrician) can find serious problems. If the models have too much uncertainty due to incomplete and/or unreliable data, it should be known upfront and also broadcast widely what kind of data is urgently needed.
Am I being too harsh?

"Sar-Cov was a similar virus"
otoh - the limited initial response to the covid 19 pandemic probably leaned a little too much on the previous MERS and SARS outbreaks

Not one aspect of the Trump administration response to SARS-CoV-2 was like the Bush administration response to SARS-CoV.

But the Bush administration had been primed by anthrax epidemic fears and actual anthrax attacks by workers in the epidemic and pandemic response industry.

otoh - the limited initial response to covid 19 was probably in part due to
the previous experience & limited spread of SARS & MERS in the u.s.
there was an expectation that covid 19 would also be low risk/low spread

I agree with the statement that if the lockdown works it will look like it was not necessary. However, this is the time to be assessing when and how to ease the lockdown, because if an extended lock down results in deaths due to economic deprivation, suicide, and other unforeseen causes then we might also equally say that the lock down worked, but because of how it was implemented it ended up doing more harm than good.

Are there models out there that take into account the cost (financial and human) of the lockdown vs. the benefit in saved lives ?

The counterfactual is precisely the point. The authoritarians have learned from this manufactured crisis:

1) You can exceed your constitutional authority (federal or state) in the face of even a mild threat and the people will not only refuse to defend their rights but actively assist you in further infringing them.

2) You can escape blame and justify your actions by claiming ("without evidence", to borrow the term the media likes to use for claims not their own) that
the threat would have been a million times worse if not for your actions and the TED Talk crowd will lap it up because ¡SCIENCE!

There isn't anything special about this pandemic other than that it's going to be the one that sets the bar for future government overreach. You all should remember what paper libertarians like Tyler were doing during it, i.e., running interference for it. The only thing missing is the obligatory passive-voice "mistakes were made" post on MR here around Father's Day.

Alex was far, far worse with his "And so it begins" crap.

The clinical model gives you very good numbers on the relative costs.
If the clinician tells you your covid ICU costs 200 grand, then you can compare that to a house, within the error range of our erratic Fed.

We don't have a clinical model, and that is where all the errors come from.

"it should be known upfront and also broadcast widely what kind of data is urgently needed."
I've heard epidemiologists openly state that the amount of testing we are doing is far too small to give any meaningful information. They have also pointed out that saying we need tests has not increased the number of tests available, all BS from the government notwithstanding.

Partly true. I haven't heard from epidemiologists the caveat that widely different R0 are consistent with the known data.
That we need more tests (RT-PCR and antibody) has been clamored for by everyone, including economists (Romer, Tabarrok on this blog).

"There are no straight answers."

If so, then "I'm not getting straight answers" is tautological, so how can you disagree?

And those who are making claims about straightness of some answers are lying.

If MR are experts in modeling, where is their COVID model?

You don't have to be an expert to realize that this:

"In excess of 70% of US states had actual death rates falling outside the 95% prediction interval for that state (see Figure 1)"

Is a huge red flag.

You are saying the models were written based on political actions by governors, businesses, and populations, et al weeks before the governors, businesses, and populations made decisions?

And you know the deaths and hospitalizations in June 2020 already?

I bought the original "totally wrong model prediction" The Limits to Growth and saw over and over all the right wing claims about what it predicted, which I could readily check the short book to see no such prediction existed.

But most important, the book argued for radical changes in the extraction and use of resources, something I helped with, as one of a hundred million change agents, by small amounts weekly.

For example, those liars were arguing for continuous increases in oil production in the US if the GOP got power, but the result was continuous decline from 1986 to 2006, and then under the anti oil Democrats and Obama, all the GOP driven oil production declines were reversed so the US returned to peak production last seen in 1970.

Let's look at the right-wing, conservative, GOP, and Trump predictions.

"You are saying "

I'm not saying any of that. Go take your meds mulp.

This is childish. Of course you don't need a model to ask questions about modeling.

True. But if you made your own model, you would be more than a dilettante.

MR/Mercatus/TC have the resources to do serious things like improving testing and data gathering efforts, collaborating with epidemiologists, generating open source models, and going on the record with a better forecast. Not to mention, they can study our options to improve the supply chain of masks and other PPE. Instead, they are choosing tribalism and mood affiliation by stirring up unnecessary antagonism towards epidemiologists, which is very disappointing.

Thank you for this post and for your recent calls for more nuanced coronavirus debate.

Some in the mainstream worry that discussion of sub-apocalyptic outcomes will drive reckless decision making. I think just the opposite is true. Let me quote a comment by tg that sums up the position well (and perhaps deserves to be signal boosted, lol):

"This [less apocalyptic] model doesn't actually argue for 'let it rip' behavior; indeed it depends a bit on the parameters but if anything it argues for the opposite.

It's the standard models that actually support "let it rip" more strongly because restrictions don't have that strong of effect on the end total number of people infected; it just takes longer. So you either drive it down to (near) extinction; justify it all on keeping hospitals operational and saving lives that way; or as a delay for better treatments.

Here, in addition, early restrictive action limits the spread while the people most likely to spread it widely (being themselves among the early ones affected) have it, and since R falls more quickly over time with this model that leads to a more substantial reduction in the total number of people infected. It argues more that policies should start very restrictive then ease up slowly over time (though the exact policies and how they interact with various populations matter)."

That's from your epidemiology-and-selection-problems post.

The Oxford paper from 2 weeks ago makes the same point as the Atkeson paper: the world (the data) would look the same if COVID actually spread far faster (but was far less deadly) than we previously expected.

The IHME model, seemingly the gold standard for policy makers in the US, has been absolute garbage. They predicted 140k hospitalized in NY at one time, but NY will likely peak around 20k hospitalized. Keep in mind the IHME model was released after, and took into account, social distancing and the lockdown measures. Before COVID hit, NY had 50k hospital beds available, so NY won't even come close to capacity. Cuomo has repeatedly said that no one has been refused care (including a ventilator) due to shortages in equipment, beds, or people.

IHME had been tracking deaths fairly well, but I'm expecting deaths to come in well below their median projection in the end.

It looks like deaths in New York peaked a few days ago (though I'm waiting to see if there's an uptick on Tuesday as death reporting has tended to be low on weekends, plus the Easter holiday).

But since NY hospitalizations and intubations have seemingly peaked, and intubations frequently end in death, it does look like NY is past the worst of it.

To clarify, "IHME had been tracking deaths fairly well" in *total* for the US. As the paper notes, at the state level the predictions have been useless. And I expect the actual number of US deaths in total to start diverging from the model.

There was a paper highlighted by the Economist last week. The authors compared influenza-like illnesses (ILI) this year vs last year. These are reported from doctor's visits all over the US. They showed ILI were much higher this year but flu-positive tests were not.
They attribute the discrepancy to excess unreported Covid infections. They find that the detected portion is only 1/100 to 1/1000 of the total infections. It's hard for me to believe it is so little. Iceland's deCODE showed 0.86% infected and 50% asymptomatic.
A large silent undetected cohort means Covid-19 is more infectious but less lethal, and that severe cases would happen more or less in a short period instead of spread out over months. It makes sense to me that Covid-19 should be more infectious than the flu, as they are both respiratory viruses, but we have immunity and a vaccine against the flu slowing its spread, not so for Covid-19.
The paper has the merit of highlighting this potential large silent infected population.

https://www.medrxiv.org/content/10.1101/2020.04.01.20050542v1

My wife and I probably had it. We will never know because, being asymptomatic, we can't get tested. She was told by her doctor she had the flu. I didn't go to the doctor. Repeat that story 20 million times and you'll get a pretty good idea of the data challenge we're up against.

Wait for the antibody test. If you are asymptomatic the test accuracy is pretty poor.

> They showed ILI were much higher this year but flu positive test were not.

Are there more hypochondriacs this year than last year?

"the detected portion is only 1/100 to 1/1000 of the total infections."

That would mean that in NY, 100%-1,000% of all residents in the state have been infected with coronavirus
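For the record, the arithmetic behind that objection, using rough mid-April figures (roughly 200,000 confirmed cases in New York State against a population of about 19.5 million; both numbers are ballpark assumptions, not taken from the paper):

    \[
      2\times 10^{5}\ \text{confirmed cases}\times 100 \;\approx\; 2\times 10^{7}
      \;\gtrsim\; 1.95\times 10^{7}\ \text{residents},
    \]

so a uniform 1-in-100 detection rate would already imply that essentially every resident of New York State has been infected, and 1-in-1,000 would imply ten times the state's population.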

I never understood where the 100,000-200,000 US deaths came from. Did they specify a timeframe?

It seems foolhardy to me to make estimates with a 6-month or longer timeframe.

What we do know, however, is the curtain will come down on the first wave in the next few weeks. At that point (end of April), I expect US deaths to be 35-40K, certainly less than 50K.

After that, the tricky part (unless Sweden is successful).

From deriving numbers from limited data and plugging them into standard models which are validated in nature. E.g., animals have epidemics, so plenty of cases exist where data is well collected, like the spread of hoof and mouth, many bird flu viruses, etc. Farmers and livestock operations do a lot of data collection.

Note, rocket companies model the process of getting things to and from orbit. Some write the models to fail so the design of the rocket system can be fixed to not fail. Eg, SpaceX. Others model rocket systems to succeed, for managers like Trump. Eg, the Lockheed run Boeing.

Your SWAG might be low or high, but you are modeling human effort from shoveling money to workers to thwart nature. You are clearly optimistic that Trump and Congress in the US, and governments globally will be both willing and able to shovel money to pay workers to thwart nature.

We're already at 34,000 deaths so your 35-40k estimate looks off. "Certainly less than 50k" seems in question as well.

The IHME model is a curve fitting exercise, and has already been criticized by numerous epidemiologists.

How many models do you know that aren't curve fitting? So what.

The IHME model is a curve fit that they extrapolate into the future. This is different from propagating a mechanistic or agent-based model into the future. Both can fail, but the failure modes are different. The IHME model extrapolates a curve fit into the future and makes assumptions about how fat the "tail" is, based upon past data, without any mechanistic understanding.
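A rough sketch of the distinction (toy data; IHME's actual implementation reportedly fits a parameterized error-function curve to cumulative deaths, which is not reproduced here): the curve-fitting approach extrapolates whatever functional form was fit, while the mechanistic approach steps a transmission model forward in time.

    # Toy contrast: curve-fit extrapolation vs. a mechanistic simulation.
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import erf

    np.random.seed(0)

    # Made-up cumulative-death series for the first 30 days.
    t = np.arange(30)
    observed = 50 * (1 + erf((t - 40) / 12.0)) + np.random.normal(0, 1, t.size)

    # 1) Curve fitting: fit a sigmoid-like shape, then extrapolate the shape.
    def sigmoid(x, scale, center, width):
        return scale * (1 + erf((x - center) / width))

    params, _ = curve_fit(sigmoid, t, observed, p0=[60.0, 35.0, 10.0])
    extrapolated = sigmoid(np.arange(120), *params)   # no mechanism, just the fitted form

    # 2) Mechanistic: step a simple SIR model forward (parameters are illustrative,
    #    and this run is NOT fit to the toy data -- shown only to contrast the logic).
    def sir_deaths(beta, gamma, ifr, i0, n, days):
        s, i, d, out = n - i0, float(i0), 0.0, []
        for _ in range(days):
            new_inf, resolved = beta * s * i / n, gamma * i
            s -= new_inf
            i += new_inf - resolved
            d += ifr * resolved
            out.append(d)
        return out

    mechanistic = sir_deaths(beta=0.3, gamma=0.15, ifr=0.01, i0=100, n=1e6, days=120)
    print(round(extrapolated[-1]), round(mechanistic[-1]))

The curve fit reproduces the in-sample window by construction, and its out-of-sample behavior is whatever the chosen functional form dictates; the mechanistic run's forecast depends instead on assumptions about transmission. Both can fail, as the comment says, but in different ways when their respective assumptions break.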

We don't generally blame generals because they "choose" to make decisions in the fog of war. We understand they've got to. The war is there, and imperfect data is all there is.

In the same way, I don't think it's reasonable to blame epidemiologists because they can't tease R0 out with high precision in the first weeks of a global pandemic.

I mean, just think about the extreme sensitivity there will be to initial conditions, in the base city with case zero, and then in every different location where the virus touches down.

You have your first cases, but are they shut-ins or super-spreaders? If you try to do a range, from all shut-ins to all super-spreaders would that have any meaning or use?

I think not. This is more like war than it is like the fed pondering an interest rate cut. You get your R0 as X +/- 2 and you have to make your plan with that.

What if they were right about the total cases but wrong on the severity? Is it their fault that testing is so far behind the curve? What if they had R0 right but were wrong about death totals because those most at risk were able to get out of the way? The severity of illness in health care workers is a major issue. Nursing homes are a problem. Prisons are a risk. But did financially secure at-risk groups find a way to isolate effectively? A Lucas effect, in that those at risk knew they were at risk and took action. If gay men changed their sexual practices and reduced deaths from HIV, did forecasters fail in their predictions of higher death totals?

BTW, I don't know, but did the minority community more closely track the worst expectations of the model? If so is that a proxy for what would have happened if some weren't able to buy protection from the pandemic.

"the minority community": eh? The what? Do you mean blacks? Central Americans and Mexicans? Hasidic Jews? What is this mysterious "community"?

Ah, the soft bigotry of low expectations. You mean poor people, not minority people don't you? Those aren't the same groups.

There isn't even a single "they" here.

Those that win wars, or aspire to do so in future, do question why generals chose a particular strategy, and whether it was a good one.

"We can run wild for six months or maybe a year, but after that I have utterly no confidence." so Yamamoto warned the generals of Japan. They still went hard and went early and went all out in an opening offensive and... lost. But at least those that came after could question whether again whether that was the right choice. That there were thing they did not know, did not absolve them.

But that's not modeling either. That's war-gaming, and history-teaching.

Taiwan didn't have a better model. They took their history and preparation seriously.

Just read on Instapundit: "BELGIUM’S WUHAN CORONAVIRUS DEATH RATE SURPASSES ITALY’S: “In the meantime, neighboring Netherlands, which a few weeks ago had a similar rate to Belgium’s, now has half of Belgium’s."

So who know if Taiwan did anything right? We have no idea why certain people or groups are more affected than others. That's the scary part. Seems for every claim that can be made, there's a counterfactual.

I think the interesting thing is that Taiwan didn't try just one or two things, to be right or wrong.

"Taiwan rapidly produced and implemented a list of at least 124 action items in the past five weeks to protect public health," report co-author Jason Wang, a Taiwanese doctor and associate professor of pediatrics at Stanford Medicine, said in a statement.

https://www.cnn.com/2020/04/04/asia/taiwan-coronavirus-response-who-intl-hnk/index.html

I believe you are entering a bit of distinction without a difference here. (Or else you've simply abandoned your epis = generals analogy).

The central point (which you've shifted away from) is that yes, you look critically at what has been done and people advising you on that course and the assumptions they made, despite unknowns.

Another example of creative editing for a post.

Read the second to the last paragraph in the conclusion section of the first paper:

1. Author doesn't take a stand on which model is correct.
2. Author acknowledges his model is simplistic and doesn't take into account deaths from health system overload or mitigation efforts
3. Author says that other epidemiological models may predict R0 and the spread better than his simple model.

Why wasn't the concluding paragraph posted, and how many people read the paper, much less the conclusion?

The main point is that the SEIRD models have too many parameters, which are hard to estimate from deaths and the CFR and often are just guesswork, because the data are incomplete and confusing and the true number of infections is unknown.
Widely different R0 can give similar results in the short term but become very discrepant in the long term.
It shows that adding empirical exogenous data, such as from serological testing, significantly reduces the uncertainty in the model.
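For readers who want the object being argued about, a common textbook form of the SEIRD system (notation varies across papers, and this is not necessarily Korolev's exact specification) is:

    \[
    \begin{aligned}
      \dot S &= -\beta\, S I / N \\
      \dot E &= \beta\, S I / N - \sigma E \\
      \dot I &= \sigma E - \gamma I \\
      \dot R &= (1-\mu)\,\gamma I \\
      \dot D &= \mu\,\gamma I
    \end{aligned}
    \qquad R_0 = \beta/\gamma
    \]

with beta the transmission rate, sigma the rate of leaving incubation, gamma the removal rate, and mu the fatality fraction. Only D, plus a noisily reported fraction of cumulative infections, is observed, which is why several combinations of transmission rate, fatality fraction, and initial conditions can fit the same short-run death series.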

So.
Flip a coin?
Or
Work with what you have
And
Refine as you go along.

These models aren't complicated. How people behave in response to an impending epidemic is. I discouraged handshaking a week before school closures because the Premier League did. We didn't shake hands at church either and canceled midweek services over a month ago. I was shocked that a typically crowded restaurant was only half full before takeout-only was mandated. My wife has worked from home for over a month. Sacrifices like these among the population are why estimates are being revised lower.

So .. as Tyler worries about the credentials of our epidemiologists, there is apparently a newly created and official governmental Council to Reopen America.

Seated upon it are both Ivanka Trump and Jared Kushner. Not just one. The both.

America as shitshow.

Derailment bait

Whoa there, buddy. We don't generally blame generals because they "choose" to make decisions in the fog of war. We understand they've got to.

Well played

My current mood is that if "the rational community" had lived up to their name, we would not be in this extreme of a mess.

We would have a semi-rational president capable of grasping semi-rational plans.

"generals" lol, as if insipid nepotism is "the real meritocracy."

"derailment." lol, as if noticing that same insipid nepotism is the problem. Not the insipid nepotism itself.

What is wrong with you people?

Who are you to criticize the GEOTUS? His actions have cut the death toll from the Wuhan flu by two orders of magnitude, according to the models! And just think, all those rationalists were claiming that banning flights from China was racist just two short months ago.

Are you now ex-libertarians, for absolute government power, delegated to daughters and son-in-laws? Tribal chieftain ftw?

"When somebody is the president of the United States, the authority is total" - President Donald Trump, today.

I can't think that your 2005 selves would be happy with what you've become.

Indeed, my 2005 self was going through my libertarian phase in college. (It's a time for experimentation.) Regrettably, even then I was much older than when most realize that libertarianism is an inherently self-defeating ideology.

Nonetheless, I have more faith in Ivanka Trump to run the country than I do the army of blue-checkmark midwits who have been armchair quarterbacking their way through this whole manufactured crisis without once admitting that their terminal TDS has them doing about-faces on a daily basis in spite of their affected rationalist mindset. That includes you, not sure if you realize that comments on this blog don't get deleted after two days. You've all been made to look like fools for almost four years now, the notion that you ought to be regarded as a voice of reason and sanity is delusional in the extreme.

"I have more faith in Ivanka Trump to run the country than .."

I wonder if Slappy and JWatts understand that they've backed into this same corner.

I have the same faith that a magic 8 ball could run the country as well as any political candidate. But then again, I’m intelligent enough to understand that elected people don’t run countries in a democracy. It’s one of the perks.

You don't have any reason to believe that Ivanka is unfit to serve in a leadership position other than that you don't like her last name. (Who's irrational now?) She is, more objectively, an unremarkable choice, or by extension, a lazy choice on her father's part. The nepotism argument would carry a lot more water if the opposition weren't attempting to recruit the wife of a former POTUS to run for the second consecutive election because the guy who used his sons to make millions off his name is a couple pecans short of a pie.

What if there aren't models? Any model that captures the transmission dynamics of communicable disease will be extremely unstable by definition. Any parameter change in an unstable model will result in extreme differences in output.

All that the models tell us is that the spread can be controlled, but they don't tell us whether the results will offset the costs.

Either avoid the situation or expect the worst. The first is much cheaper.
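A small worked example of that sensitivity, using nothing more than the early exponential phase and illustrative numbers: if cumulative infections grow roughly like R_0 raised to the number of generations, then after ten generations (about seven weeks at a five-day generation interval)

    \[
      2.0^{10} \approx 1{,}000
      \qquad\text{versus}\qquad
      2.5^{10} \approx 9{,}500,
    \]

so a 25% difference in R_0 changes the projected count roughly ninefold before saturation, seasonality, or any behavioral response is even considered.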

All models are dealing with the problem of bad data.

Globally, our data collection system was optimized to detect the initial cases of emerging diseases. The design goal was to detect a novel disease with the potential to flare into a pandemic as early as possible. This was a good design choice, but it meant sensitizing to handfuls of cases and deaths. This sensitization was achieved by creating a distributed network of tens of thousands of nodes worldwide, any one of which could register a novel, aggressive disease from only a handful of cases. The system performed as designed, performed well, and is a true marvel of human achievement.

But, because the design goal was to detect a few cases as early and locally as possible, no work was done to build infrastructure to simultaneously register a massive number of cases from a massive number of nodes.

Detecting COVID-19 cases, recoveries, and deaths all require labor. Reporting COVID-19 cases, recoveries and deaths also requires labor. The infrastructure necessary to support rapid data collection and reporting at scale is simply not available. As a consequence, the individual nodes in our disease detection infrastructure are being overwhelmed, and simply reporting the maximum number of cases they are capable of reporting.

This has been apparent in the death statistics internationally as well as domestically. For instance, Iran appears to have the ability to report ~125 COVID-19 deaths daily. It seems improbable that deaths grew quickly to around 125 deaths per day, and then remained stable at that level for a month. This would imply a reproduction rate reaching stability at approximately 1, which is unlikely. Nor is this phenomenon limited to the Iranian regime. France appears to have the ability to report ~500 deaths daily from its hospitals, with nursing homes reporting deaths in batches, and aperiodically.

Domestically, New York appears to have the ability to report around 750 deaths daily from its hospitals. Anecdotally, New York estimates something on the order of 180 reportable COVID-19 deaths occurring in homes or on the street. Georgia appears to be reporting less reliably than Iran, with deaths swinging up to 100 in the middle of last week, and then cycling down to 10 yesterday. Put another way, reported deaths on Easter Sunday at the American epicenter of New York remained around 750, in line with their recent reports. Nationally, however, deaths declined to around 1,500 from around 2,000 just two-days before. Does anyone really believe everywhere in America but New York is gaining ground against COVID-19? No. It is far more likely that the Easter weekend depressed the ability of most American states to report deaths.
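One standard, if crude, way to cope with the weekend and holiday artifacts described above is a trailing seven-day average of reported deaths; a minimal sketch with made-up numbers:

    # Trailing 7-day average to smooth day-of-week reporting artifacts (toy numbers).
    reported = [700, 760, 740, 720, 750, 520, 480,    # weekday highs, weekend dips
                710, 765, 745, 730, 755, 510, 470]

    def trailing_average(series, window=7):
        """Average of up to the last `window` reports ending at each day."""
        out = []
        for k in range(len(series)):
            chunk = series[max(0, k - window + 1):k + 1]
            out.append(sum(chunk) / len(chunk))
        return out

    print([round(x) for x in trailing_average(reported)[-7:]])   # the dips largely wash out

Smoothing hides the reporting cycle but of course cannot recover deaths that were never recorded, which is the deeper point being made here.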

As a civilization, we don't have a detection and reporting system in place that allows us to track rapidly moving pandemics at scale. Korolev tries to cope with this by looking at localities that can be randomly sampled, such as Iceland, in order to obtain good data. This is dangerous, however, and may be little better than anecdote. Viruses attack via specific pathways. Whether a virion is able to bind to a victim cell is dependent on the vagaries of molecular machinery. How well a virion is able to disguise cells it has infected from our macrophages is also dependent on molecular-scale effects. Relying on Iceland for guidance is dangerous, because Iceland is a small population that has not mixed much with the rest of humanity for the past 1,000 years. It is entirely possible some COVID-19-protective mutation propagated itself across the Icelandic population, and thereby renders Icelandic disease progression uninstructive for the rest of humanity.

At the outset of the Chernobyl disaster, all the dosimeters registered around 3.6 roentgens per hour, well below the fatal dose. But of course, the dosimeters weren’t made to detect fatal doses, they were made to be exquisitely sensitive to any heightened levels of radiation. Because of this, the dosimeters weren’t really reporting 3.6R/h, the dosimeters were reporting that radiation levels were higher than what they could detect.

We’re in the same position today. Our detection and reporting system is simply not built to track a rapidly moving pandemic at scale. Until we construct a system capable of detecting and reporting at scale, it is foolish to use precision modeling tools to guide our decision making.

Presently, the lower-bound on COVID-19 is that we’re suffering a pandemic that is worse than “the hundred-year flood.” Because the bodies haven’t started piling up in the streets yet, we can now be cautiously optimistic the upper-bound on COVID-19 is that we’re suffering a pandemic that’s less bad than the thousand-year flood.

We can’t say much more yet.

Please accept a pat on the back for that.

I tend to say that I suspect such-and-such is true but that I hold that belief with low confidence. Then people demand to know why my confidence is low. "Because it's a novel virus, you nitwit" is what I think. What I say may lack the last two words. Maybe it shouldn't?

+1 for many reasons. I am concerned, though, that by your effort to be cogent, you have mistaken this blog's comments for a place where priors are re-thought.

That's good stuff, as in pointing out specifically what in the data is obviously garbage and has to be fixed before anything useful can be gleaned out of any 'data' one might have.

I'd say we are past the point where expecting anyone to come up with anything that might help with making a decision, such as when to end the quarantining and the like, is realistic. It's going to be a matter of feeling one's way and hoping for the best.

I'd say that possibly a big debrief at the end of it, and a good look at the data generation problems with an eye to doing better next time, would be in order.

The ability of the model to make accurate predictions decreases with increasing amount of data. (figure 2)
----
Why does your abstract tree sometimes diverge? This is what the major new math paper addresses.

“ I do feel I am not getting straight answers.”

You should try asking economists questions sometimes. They always provide concise and consistent answers. ;)

Just linking a good twitter thread on these models: https://twitter.com/gro_tsen/status/1249738589396766720

Tyler's vendetta against epidemiology during a global pandemic is doing serious damage to the reputation of this blog. Posting a random anecdote from someone claiming epidemiologists aren't as smart as engineers, and then concern trolling about wanting a positive response, is breathtakingly childish. My PhD is in physics, and I now work in quantitative biology; I know both epidemiologists and engineers, and they are both comparably smart and mathematically sophisticated (by the way, don't ask me what my physics advisors would have said if I had decided to go into economics).

As far as positive responses go, the epidemiologists I know are too busy doing actual constructive work to write blog posts or mouth off on Twitter about GRE scores. Good for them, I say.

I disagree. Epidemiology is particularly relevant during a pandemic. Science does not and cannot become immune to criticism when it is most relevant. Decision makers around the world have made hugely consequential decisions affecting hundreds of millions of people based on these models. It is a service to science and public discourse to ask these questions now as we, as society, debate who to listen to.

The enormous amount of push-back strikes me as very odd for scientists. If epidemiologists deserve this "special protection," why not politicians or central bankers? A lot of people who now protest that all epidemiologists and public health officials should be immune from scientific debate have had no qualms about passing immediate judgment on others....

https://slatestarcodex.com/2014/11/03/all-in-all-another-brick-in-the-motte/

Scott Alexander:
"So the motte-and-bailey doctrine is when you make a bold, controversial statement. Then when somebody challenges you, you claim you were just making an obvious, uncontroversial statement, so you are clearly right and they are silly for challenging you. Then when the argument is over you go back to making the bold, controversial statement."

Motte: "Science does not and cannot become immune to criticism when it is most relevant."

Bailey, Tyler Cowen:
"
a. As a class of scientists, how much are epidemiologists paid? Is good or bad news better for their salaries?
b. How smart are they? What are their average GRE scores?
c. Are they hired into thick, liquid academic and institutional markets? And how meritocratic are those markets?
d. What is their overall track record on predictions, whether before or during this crisis?
e. On average, what is the political orientation of epidemiologists? And compared to other academics? Which social welfare function do they use when they make non-trivial recommendations?
f. We know, from economics, that if you are a French economist, being a Frenchman predicts your political views better than does being an economist (there is an old MR post on this somewhere). Is there a comparable phenomenon in epidemiology?
g. How well do they understand how to model uncertainty of forecasts, relative to say what a top econometrician would know?
h. Are there “zombie epidemiologists” in the manner that Paul Krugman charges there are “zombie economists”? If so, what do you have to do to earn that designation? And are the zombies sometimes right, or right on some issues? How meta-rational are those who allege zombie-ism?
i. How many of them have studied Philip Tetlock’s work on forecasting?
"

Bailey, Anonymous Commenter:
"How smart are epidemiologists?
The quantitative modelers are generally much smarter than the people performing contact tracing or qualitative epidemiology studies. However, if I’m being completely honest, their intelligence is probably lower than the average engineering professor – and certainly below that of mathematicians and statisticians.

Do epidemiologists perform uncertainty quantification?
They seem to play around with tools like the ensemble Kalman filter (found in weather forecasting) and stochastic differential equations, but it’s fair to say that mechanical engineers are much better at accounting for uncertainty (especially in parameters and boundary conditions) in their simulations than epidemiologists. By extension, that probably means that econometricians are better too."

I don't get it. Maybe my GRE scores aren't high enough either.

Ok, I'll explain. TC's recent posts imply that epidemiologists are not as smart as economists, and that their methods are not as well developed or reliable. That is the "bailey" (the real claim he is making, which paints all epidemiologists with a broad brush). Then, when TC is criticized, he will retreat to his "motte" and say "I honestly don't know if epidemiologists are smarter or dumber than economists. I just want to have a scientific debate about the models." As a rhetorical strategy, this is clever. But for people who can see through it, such as myself and OP Phil, it is a major self-inflicted blow to TC's credibility. It ends up being such a huge distraction from the discussion of the models, that it throws into question whether it's even worth reading TC's comments about the models themselves, since he will inevitably conclude that economists know better.

I believe you are mis-characterizing (maybe willfully). The debate was and is always about the models. There is likely a strong correlation between intelligence and quality of scientific output, or do you believe otherwise? It seems YOU are unwilling to talk about the models and resort to deflection instead. Tyler talks about the models, first and foremost. You (and Phil) talk about Tyler claiming he is "doing serious damage to his reputation" by asking critical questions about other scientists' output.

At their core, all models contain the biases of their creators. The creators believe X, Y and Z matter a lot, but in reality, X might matter a lot, but S and T matter far more than Y and Z and weren't considered at all.

If you want to see how badly biases matter, just look at global warming models. In a perfect world, you'd expect 50% of the models to have guessed high, and 50% to have guessed low, relative to what actually happened.

But in practice, 99% of models in the last few decades have dramatically overestimated warming. What does that tell you? It tells you that the people working on global warming models really believe it's a problem, and their biases tend to overestimate what is coming by a lot. And that shouldn't be a surprise: If they thought global warming was no big deal, they'd likely not have gone into global warming studies.

Same with an epidemiologist: They went into that field because for the last 50 years they've really enjoyed the stories and movies about outbreaks and pandemics, and those have fueled their view of the profession, their perceived import of the profession, etc.

Is it any wonder when we get an actual pandemic that they WAY overstate what will happen? This is their life's work.

1. Tyler has an entire post where he questions the qualifications of epidemiologists
2. You say that for Tyler, it's all about the models
3. Clearly you're incorrect, see #1
4. I'm happy to talk about the models
5. A little intelligence is enough to make you into a dangerous modeler. Experience and domain-specific knowledge are needed to become a humble modeler. Better data are needed to become an accurate modeler. Usually, in the real world, better data are the limiting factor.
6. As for the models themselves, it's painful to see Tyler blogging about a junior economist rediscovering things that epidemiologists already know about their models
7. This is quite embarrassing for the blog and for Tyler. He could just read actual epidemiology papers and blog about those instead

Totally agree.
This is an effort to raise status of economists over those who study epidemics.
Next,
We will try to raise status of economists
Over car mechanic
With respect to car repair.

Here's my post along these lines, inspired by some MR comments:

Does the Spread of Tom Hanks Disease Inevitably Slow as We Run Out of Tom Hankses?

But … perhaps there is in this case something epidemiologically different about popular people like Tom Hanks?

First, I want to thank everybody who has already fired off a comment to the effect that “Tom Hanks is NOT popular with me.” Keep doing what you do. Don’t ever change.

Second, I suspect the epidemiologists and public health experts got blindsided by a disease that, by its take-off phase of community spread, was transmitted most not by the Marginalized and the Vulnerable (as was expected, judging by all the February op-eds about how racism is, as always, the real menace), but among the respectable.

To both the Establishment and its handful of unpopular critics, the next epidemic would of course spread from the fringes of society.

So, in a mental atmosphere that has only gotten Woker in the six years since Ebola, it’s not surprising that few were thinking about how to model a pandemic that early on tends to be most prevalent among people whom many other people want to shake their hands.

Epidemiological models tend to assume that different germs have different rates of transmissibility for biological reasons (e.g., measles very high, the novel coronavirus quite high, flu fairly moderate), which can then be altered by changed policies, such as social distancing.

But most models assume that human beings are fungible. This is partly due to concerns about admitting the existence of racial differences, which would be the Worst Thing Ever, and partly due to the need to simplify.

But what if it turned out that race was a minor factor, but popularity was crucially important? In this perspective, Tom Hanks, Idris Elba, Prince Charles, Boris Johnson, Kevin Durant, Chris Cuomo, and Rand Paul are rather like each other on the most medically relevant dimension: that on, say, March 1, lots of people wanted to shake their hands.

But what happens to the spread rate of the disease as the most popular in society either get it or hunker down? Does it continue to spread as rapidly as it gets down to the ho-hum bulk of the population?

https://www.unz.com/isteve/does-the-r0-for-tom-hanks-disease-inevitably-fall-as-we-run-out-of-tom-hankses/
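For what it's worth, the intuition can be checked in a toy model. Below is a sketch of a two-group SIR with proportionate mixing, where a small "high-activity" group has four times the contact rate of everyone else; all the parameters are invented for illustration, and nothing here is a claim about the actual epidemiology of COVID-19.

    # Two-group SIR with proportionate mixing vs. a homogeneous SIR at the same R0.
    # All parameters are illustrative; "high" = a small, very socially active group.
    GAMMA, R0, DAYS = 0.2, 2.5, 400

    def attack_rates(activity, shares, n=1_000_000, seeds=10):
        """Daily-step multi-group SIR; returns the final infected fraction per group."""
        mean_a  = sum(a * p for a, p in zip(activity, shares))
        mean_a2 = sum(a * a * p for a, p in zip(activity, shares))
        tau = R0 * GAMMA * mean_a / mean_a2        # calibrate transmission to the target R0
        N = [p * n for p in shares]
        I = [seeds * p for p in shares]
        S = [Ni - Ii for Ni, Ii in zip(N, I)]
        cum = list(I)
        denom = sum(a * Ni for a, Ni in zip(activity, N))
        for _ in range(DAYS):
            weighted_prev = sum(a * i for a, i in zip(activity, I)) / denom
            for g in range(len(N)):
                new = activity[g] * tau * weighted_prev * S[g]
                S[g] -= new
                I[g] += new - GAMMA * I[g]
                cum[g] += new
        return [c / Ni for c, Ni in zip(cum, N)]

    hetero = attack_rates(activity=[4.0, 1.0], shares=[0.1, 0.9])   # 10% "Tom Hanks" group
    homog  = attack_rates(activity=[1.0], shares=[1.0])

    print("final attack rate, high-activity group:", round(hetero[0], 2))
    print("final attack rate, everyone else:      ", round(hetero[1], 2))
    print("final attack rate, homogeneous model:  ", round(homog[0], 2))

With these numbers the high-activity group is almost entirely infected early on, the rest of the population much less so, and the epidemic ends well short of what the homogeneous model with the same R0 predicts, which is roughly the mechanism the post above is gesturing at.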

"But most models assume that human beings are fungible."

The concept of super-spreaders has been around for a while. A quick google search of the phrase "SARS super-spreader" will show this, for instance. The unfortunate reality is that, once you run out of Tom Hankses, the next most potent group of super-spreaders are people who work at hospitals and medical clinics and the patients confined there. Hence the advice to stay home as long as possible, even if you think you might be infected with covid-19, before seeing the doctor.

The epidemiological model depends on hospital data; the hospitals are overwhelmed in a pandemic - a tautology. Ex post the models work much better.

The better approach is to examine the conditions under which the hospital chain is sustainable. Hence the clinical data: which treatments lead to sustainable hospitals. The docs, in essence, treat the sick hospitals. The sustainable condition is when the hospitals have the respiratory distress syndrome solved. At that point we know hospital capacity, against which the population can adapt.

How does the population adapt? The hospital capacity is known once the clinical steps are known. Populations will prepare to avoid the wait in line. They will identify the susceptible and break them into manageable groups for monitoring such that hospital capacity is not overloaded.

The disease is manageable when hospitals are sustainable.

We can identify some proofs about the abstract tree and economics. We need free entry and exit into congested queues, the right to skip long lines. A sustainable hospital chain allows us to measure the value of preparedness and avoidance; we observe the hospital queue if we get the flare-up.

1. The Korolev paper does not show anything new that good epidemiological modelers do not already know.

2. The IHME model is garbage, and the vast majority of epidemiological modelers despise it. The only reason it is getting so much attention is because the White House likes that its numbers are much lower than other models and because they have an army of workers who can quickly produce results and nice graphics. It is not a good representation of the field of epidemiological modeling as a whole.

So perhaps there is a difference between "medical" epidemiology and "behavioral epidemiology"; the statistical constraints being more lenient on the latter because it doesn't involve morbidity or mortality, per se... ?

Thank you for that point.

How many "good" epidemiologists are there? 95%? 5%?

It depends on who you include in the denominator. Among active researchers in infectious disease modeling prior to COVID-19, at least 95% already know the points made by Korolev or would figure them out very quickly.

As bad as IHME is, at least it learns from its mistakes. The agent models didn't, and were much worse in terms of error and hysteria - even in the face of tons of conflicting evidence. There is no doubt among people who study risk forecasting that over-estimating risk is the rational idea....for the risk forecaster.

Since epidemiologists whiffed massively on this - what is a good representation of the field exactly?

Can you explain what you mean? In what way did epidemiologists as a group whiff massively on this? I'll give my 2 cents, which is that the experts probably should have become more hysterical a bit sooner, like when China locked down Hubei.

Epidemiological modelers can be a bit gun-shy on sounding alarms. This is especially true in the wake of the controversy around Martin Meltzer’s projections of the Ebola epidemic. No one wants to be mocked as the next Meltzer for crying wolf about a new emerging disease. https://www.seattletimes.com/nation-world/cdcs-overblown-estimate-of-ebola-outbreak-draws-criticism/

The mocking has just begun and it will be fierce. And justifiably so. Being off by a factor of 1000 is not because of social distancing being a big victory. It's because you guys either don't know what you are doing or your incentives are wildly skewed. The fact that you think your cohort is "gun shy" is truly comical.

Asking for a predictive model is clearly a much taller order than asking for an indicative model. Especially when you factor in sociological factors that a compartmental model is not going to build in with any sort of ease. You can get away with compartmental models about, say, measles, and build a predictive model that gives lots of subtle stylized facts regarding things like seasonality based on just knowing a few simple facts like much higher rates of contact between children during the school year. You would expect predicting corona to be much harder because to list just two reasons, the data is poorer and societies respond in very complicated ways all at once. You are facing a situation where relevant variables are large compared to available data. So I think it's good to cut the majority of epidemiologists slack on this. Most of them are *not* making bold unjustified claims (some of the stuff like this IHME model are giving predictions as you would expect -- and through a curve fitting exercise which many epidemiologists have criticized).

If epidemiologists' predictive models have been poor, that does not mean indicative models are poor, and they can be and have been good indicators as to which interventions have good chances of success.

FWIW I think a number of points in Tyler's original post were interesting and the analogy with macro was also interesting. But at this point I don't see what he hopes to gain by kicking this hornet's nest by attacking epidemiologists' status except animosity. It would be quite easy to reword these posts to play humble, so I think he does want to spark outrage to some extent. Perhaps he thinks there will be some public backlash in the coming months but is looking to test the waters to see how people respond to a rude challenge to their authority.

Yes, it was Jan 23 or thereabout that the shutdown occurred. And it was bracketed by Jan 18 the gov of China encouraging everyone to attend a massive potluck in Wuhan.

World governments need a clear signal that they send--Level 1, Level 2, Level 3. Failure to report accurately would mean massive trade penalties. And governments need an immediate mechanical response to changes in levels.

In hindsight, China knew this was SARS in December AND they knew it was moving human to human. Those two events alone are very rare--maybe once a decade or two do they happen. In December, China should have closed borders to outgoing flights. And anyone that arrived from China should have been moved to quarantine.

I don't think it's a particularly surprising revelation that the long run forecasts can diverge so broadly given observably identical initial conditions. The surprising aspect to me was how decisively we hacked away at the economy, civil rights, etc. in the face of such broad uncertainty. See Ioannidis from March:

https://www.statnews.com/2020/03/17/a-fiasco-in-the-making-as-the-coronavirus-pandemic-takes-hold-we-are-making-decisions-without-reliable-data/

Jonathan Haidt's critique of contemporary "safetyism"--a culture or belief system in which safety (which includes "emotional safety") has become a sacred value, which means that people become unwilling to make trade-offs demanded by other practical and moral concerns--is relevant here. Would our societies/leaders have so completely closed down 20-years ago? Or even 10-years ago?

I'm also surprised Robert. Even Tyler, who is occasionally mildly heterodox on public policy issues, after pointing out some serious weaknesses in these models, can't help but disclaim that he "fully supports" the current lockdown (and the resulting global economic depression). What kind of reasoning is that?

IMHO, this saga is largely a silly status event for journalists and academics, with the typical thought policing that those groups specialize in. It's difficult to find any mainstream skeptical take on the models, the wisdom of the lockdown, etc. I do what I'm told and have a cushy federal job so the coronavirus is largely a paid vacation for me, but I think it's ridiculous to see supposed thinkers like Tyler and Alex promote a total global shutdown based on some stupid models.

Uncertain models necessitate overreaction. Especially where the downside can be very bad.
Nassim Taleb has a concise take on this:
https://mobile.twitter.com/nntaleb/status/1239243622916259841

"And just to be clear (again), I fully support current lockdown efforts [...], I don’t want Fauci to be fired, and I don’t think economists are necessarily better forecasters. I do feel I am not getting straight answers."

Interesting observation about TC's psychology. When I don't get straight answers to questions about a claim someone has made, my distrust of the claim rises rapidly, and I may even become hostile toward the person making the claim. My reaction is often stronger than a Bayesian analysis would warrant.

I admire TC for being able to stay rational in these circumstances. In short, he says: "We have been served tons of BS to justify the current policies on Covid-19. This doesn't change my personal opinion about those policies."
He is less a contrarian than he often appears to be (or than I am, for that matter).

Couldn't it be that epidemiology has just never been this important before, at least in modern memory? Economics has gone through multiple downturns and been subject to a lot of scrutiny as a result. Also, Tyler, DC Dills is a lot better than No1 Sons. Just saying.

Tyler says:
"I do feel I am not getting straight answers."

Answering as a scientist, engineer, former test engineer, data analyzer:

But what price are you willing to pay?

Tanstaafl

As a scientist I advocate public funding that pays workers to build knowledge capital which requires paying workers to build data collection capital and paying workers to run the capital, and then pay workers to analyze the data.

For example the billions paid for Hubble and CERN related payrolls, and the hundreds of millions NOT paid for the SSC NOT built and operated in Texas. https://en.m.wikipedia.org/wiki/Superconducting_Super_Collider

I find it interesting that economists never call for paying workers to collect data about economic activity; or for paying workers to collect data on how workers should be paid to produce the outcomes economists advocate; or to collect data on why workers are not paid to produce good outcomes in the US while they are in China, the rest of Asia, and Europe.

Why do economists not advocate paying workers to work?

Because economists do not want to pay workers?

The models are fine, but "The resulting estimates of R0 are heterogeneous across countries: they are 2-3 times higher for Western countries than for Asian countries" points to the fact that R0, or more relevantly the effective rate Re, in all these types of equations is a purely "social construct" and can vary with time. It isn't a virus-dependent constant, even though it is often called the basic reproductive rate. It isn't a basic property of the virus but a property of the interaction between social behavior in the culture and the virus, and of how that interaction changes with time.

Just as "social distance" isn't physical distance but relates to the probability of transmission upon contact. If everyone were in a personal protective equipment (PPE) bubble, the social distance would be infinite.

With protection and common-sense biosecurity plans that push Re below 1, the virus goes away.

https://www.dropbox.com/s/lvmhxl4nyybprsi/Protection%20in%20the%20Covid-19%20era%20V4.pdf?dl=0
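
As a toy illustration of the Re < 1 point (the numbers are hypothetical, not from the linked note): in a simple renewal process, each generation of infections scales by Re, so anything that pushes Re below 1 turns exponential growth into exponential die-out.

```python
# Toy renewal process, hypothetical numbers: infections in each generation
# scale by the effective reproduction number Re, which reflects behavior
# (contacts, PPE, distancing) as well as the virus, and so can change over time.
def generations(i0, re, n=10):
    out, i = [], float(i0)
    for _ in range(n):
        out.append(round(i))
        i *= re
    return out

print("Re = 1.3:", generations(1000, 1.3))   # keeps growing
print("Re = 0.7:", generations(1000, 0.7))   # fades toward zero
```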

The epidemiological models make wrong predictions, for sure, but that's mostly beside the point. They're right in driving home the two main points. First, left unchecked, the spread of the disease is exponential. Today, this seems trivial. A month ago, though, neither the public nor policymakers seemed to appreciate this, at all. Now they do. Thank God for the epidemiologists. More important, though, as shown by this discussion, the wrong models drive home the right point that, with the spread of disease, the errors--the risks to human life--are exponential. From a policy perspective, the lesson is caution, both with the models and their critics. And this lesson seems to have been learned too, at least by many governors and mayors.

Can any economic model claim so much success in so short a time?

"The models are utter crap . . . so of course I support the policy those models brought about."

It must suck to have to always reassure peers that you're not engaged in wrongthink.

BUT . . .

I think in general you're doing a great job on your posts. I don't blame you for having to put in those ridiculous disclaimers. Academia is what it is.

They should get some finance quants/traders involved. Using imperfect, complicated models to make decisions with huge consequences is EXACTLY what derivatives trading is about.
Wall Street models get tested against reality hundreds of thousands or millions of times every day, and their modelers/traders have much better intuition about which kinds of models work in practice and aid decision making.

The stock market is already doing this. It has been surprisingly resilient considering all the apocalyptic predictions. My advice is always to follow the money.

Well well well.

Conservatives have gelled on the line of complaint that (1) attacks forecasting science, (2) vindicates Trump's inaction, (3) provides cover for their stock market-over-old-people Sophie's choice, (4) allows them to pretend it's liberals who are the real authoritarians, and (5) opens the door for return to the Democratic hoax meme during the election.

And all they had to do was pretend it was easy to forecast this thing back when it first started, pretend that the forecasters claimed a degree of accuracy they never actually claimed, and step over the obvious self-fulfilling nature of quarantines that were literally unprecedented (and unexpected) in scope.

Once the serious adults took over from the wingnuts and acted like adults, it became clear that (1) we would succeed in significantly slowing the spread, and (2) the wingnuts would turn that success against us. So it goes when a nation is saddled with narcissistic sociopaths in leadership.

1, 2: Can you define inaction? I think calling it inaction would require that Trump had not acted more quickly than almost the entire Western world.

3. Can you define this choice and explain how it was conservatives' choosing?

4. Liberals tend to be Republicans or Libertarians in the US, the Democrats are progressives which is VASTLY different from Liberal.

5. Realistically, the Democrats were never going to stop the steady stream of stories that are less truth than fiction.

You likely are just trolling, but these questions are worth asking when you make wild assertions.

Some years back I worked for a company that had a lot of meteorologists on staff, so I spent some time learning about weather forecasting. Apparently, there is an entire business of private weather forecasting for companies subject to the vagaries of the weather. They were in fields like mining, shipping, agriculture, construction, aviation, electrical power and tourism, and they hired relatively expensive meteorological consultants or had meteorologists on staff rather than just tuning in to the weather channel or checking the newspaper because knowing what the weather was likely to be in the future meant they could make more money.

They each wanted their own weather reports and had very specific concerns. Electrical power companies wanted to know how much power they would need to generate and when. Getting it wrong meant having to buy spot power at insane prices, even without Enron. Construction companies wanted to know the weather at their active sites since some materials like concrete and dirt are sensitive to temperature and humidity. Airlines wanted to know heat and air pressure to tell how much their planes could carry and they wanted to know about winds and storms to know how much fuel they would need and how stressed their schedules would be.

There was lots of stuff they didn't care about and for some things they wanted the best case and for others the worst case. They still think about the poor schnook who had to do the forecasts for the D-Day invasion whenever their job starts to seem stressful.

The real question about the IHME model is not how accurate it is, but how useful it is. I get the impression that it has been moderately useful. Given that we don't actually know how many COVID cases or deaths there have been, and so much of our response to COVID has been social, and so subject to social effects, it has been at least as useful as the typical bespoke weather forecast.

When they built Denver Intl Airport, they stuck it out on the plains where wind shear is a risk. So they ringed it with all kinds of sensors. And then some forecaster has the job of deciding when to alter air traffic because it might be unsafe.

Getting that wrong means some airlines just blew an extra bazillion in fuel and schedule ripple effects. No pressure though. Not like D-Day.

I didn't find that first paper very useful. As a theoretical exercise, it made what seemed like an interesting point, but I have no idea (and I'm not sure he does either) of the probability that the CFR scenario he proposes would actually occur. More real-world data, and a review of the work on CFRs and on general patterns of well-being and mortality, are needed.

I guess the scenario he proposed *could* happen, just as I *could* be the queen of England.

And this is why I’m not sure knowing which discipline is better at math is that helpful. You also have to have knowledge of vast literatures on the topic to know if the math lines up with the real world.

I meant the short paper on CFR, not the first paper.

The last paper should present the data differently. I wonder if the models are predicting smoothed averages over time while the daily totals also involve a stochastic element, much like comparing rolling averages on stock returns to the daily returns. The daily returns will alternate between being above and below the rolling average. But I can't tell what's going on given how the data are displayed. It would be odd for the model to try to predict a daily total anyway, given the somewhat low N and the stochastic nature of how and when new cases are reported. Again, I worry that some people who are generally good at math but don't know the literature they're dealing with are making mistakes in their analyses -- not that I don't also worry about these epidemiological models, given the newness of the virus and the low quality of the data.
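
A quick synthetic illustration of that concern (all numbers made up): if a model targets a smooth expected trend, the raw daily reports will routinely land above and below it just from noise, much as daily stock returns straddle their rolling average, so judging the model against individual daily values overstates its errors.

```python
# Synthetic data only: a smooth underlying trend, noisy daily reports around
# it (roughly Poisson-scale noise), and a 3-day rolling average of the reports.
import random

random.seed(0)
trend = [100 + 10 * t for t in range(14)]                  # hypothetical smooth expectation
daily = [random.gauss(mu, mu ** 0.5) for mu in trend]      # noisy daily reports

# 3-day centered rolling average of the noisy reports
rolling = [sum(daily[t - 1:t + 2]) / 3 for t in range(1, 13)]

for t in range(1, 13):
    print(f"day {t:2d}  trend {trend[t]:6.1f}  reported {daily[t]:6.1f}  3-day avg {rolling[t - 1]:6.1f}")
```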

Can someone explain why the Korolev paper, in Figure 3 "USA Death Fits" and elsewhere, shows black dots for "actual deaths" with values that have never been reached? (The US has never yet had a day with nearly 3,000 deaths.)

Answering myself: it appears those points labeled "actual deaths" are of course not actual deaths in the real world but rather "actual deaths in the model." This is poorly worded, IMO, especially since the graph shows several curves with different model parameter assumptions yet only one set of black points labeled "actual deaths," which I took to mean the author was suggesting the red curve was the best set of parameter assumptions because it most closely matched real-world deaths. It would also improve clarity to note *on the graph* where we are on that curve, and to plot *actual deaths*, as in *real-world deaths that actually occurred*.

Epistemic status: moderate confidence - I'm an anesthesiologist with an MPH (focused on epidemiology) and have read all of the relevant papers. But I'm not an epidemiologist.

Epistemic effort: moderate effort - this is a comment on an obscure economics blog. But I like the discussions and have no work to do since my state medical board forbade elective procedures.  

1) Prior to the publication of this Australian/Northwestern/UT critique, the IHME noted that they were seeing strange day-to-day variance in death reporting that likely reflected changes in the reporting ability of the states rather than the true number of daily deaths. So they started reporting rolling three-day averages to smooth this out. The IHME's 95% interval for daily deaths was probably pretty good. But they weren't being judged on daily deaths. They were being judged on the daily reporting of deaths. By giving a 3-day average, they likely fixed the problem.

2) The second problem of decreasing accuracy with a shorter time horizon might be related to the first. The critique uses March 30 - April 2, which was Monday to Thursday. The IHME noted that part of the reason they started using a 3-day average was that Sundays and Mondays seemed to have low levels of reporting (and from what I can tell this resulted in Tuesdays having higher levels). Unfortunately, using 3/30-4/2 as the dates in the critique means that almost every prediction from Table 1 would have been made on a Sunday, Monday or Tuesday, likely the three most unreliable days of the week. And Figure 2 is even more unfortunate: every prediction was made on a Sunday or Monday with the outcome only on Tuesday. Predicting the Tuesday outcome was easier on Sunday because it included fewer faulty days.
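
Here is a small synthetic example of that reporting artifact (the multipliers are invented, not estimated from any data): deaths occur at a constant rate, but Sunday/Monday reports run low and Tuesday catches up, so raw daily comparisons on those days look bad, while a trailing 3-day average dampens the day-of-week swings considerably.

```python
# Entirely synthetic: a constant true death rate, hypothetical day-of-week
# reporting multipliers (which average to 1 so totals are preserved), and a
# trailing 3-day average that smooths much of the weekday artifact.
true_daily = 100
multipliers = {"Mon": 0.6, "Tue": 1.6, "Wed": 1.0, "Thu": 1.0,
               "Fri": 1.0, "Sat": 1.0, "Sun": 0.8}

days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"] * 2
reported = [true_daily * multipliers[d] for d in days]

# trailing 3-day average of reported deaths
smoothed = [sum(reported[i - 2:i + 1]) / 3 for i in range(2, len(reported))]

for i in range(2, len(reported)):
    print(f"{days[i]}  reported {reported[i]:6.1f}  3-day avg {smoothed[i - 2]:6.1f}")
```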

We're losing the forest for the trees. We made policy changes based on the Imperial College paper's predictions about peak ICU bed usage, but it seems we've all given up on tracking that. So we're using deaths as a proxy. Both the IHME and covidtracking.com have seemingly given up on following actual hospitalization and ICU usage. If anybody has these data, the world would love to see them.

The IC paper's death estimates seem to be holding up well. It estimated in mid-March that UK deaths over the next 2 years with mitigation + suppression would be between 5k and 120k. The latest IHME UK data have a point estimate of 37,494 (26,149 - 62,519) for the first wave. Current UK deaths: 12,107.

And while the IHME state-level data may have more than a 5% error rate even with the 3-day rolling correction, its national estimates also seem to be holding up. While there has been a lot of chatter about the IHME point estimates changing, I don't see enough variation in the ranges to justify policy changes.

IHME predictions for US COVID deaths during the first wave (which seems to run through 8/4/20):
3/30: 82,141 (39,174 - 141,995)
3/31: 83,967 (36,614 - 152,582)
4/1: 93,765 (41,399 - 177,381)
4/2: 93,531 (39,966 - 177,866)
4/5: 81,766 (49,431 - 136,401)
4/7: 60,415 (31,221 - 126,703)
4/10: 61,545 (26,487 - 155,315)
4/13: 68,841 (30,188 - 175,965)  
The US currently has 25,603 deaths. 

The models have been mostly right, which probably bothers economists. And policy leaders have listened to the models, which I know bothers economists. 

The IHME model has an astonishing number of flaws. But reducing the spread until we get better testing and hospital capacity is still the right move. You probably have more knowledge and expertise than the rest of this comments section combined.

It's not fair to act as if the IHME model is the consensus among epidemiologists, when most clearly don't agree with it.

In fact, a major reason why the Trump Admin. has so clearly favoured it is precisely because it's so optimistic and projects undeserved accuracy. You can't blame epidemiologists for Trump selecting data that serve to make his response look "tremendous" and "very strong".
