Super spreader immunity vs. herd immunity, from Paul Seabright

Dear Tyler,

I’ve read the comments section of your post on herd immunity pretty carefully, and one point nobody has yet brought out is the importance of variance in R0. Suppose that an average R0 of 2.72 is made up of (a) a low-spreader subset of 90% of the population with R0 of 0.8 and (b) 10% of super spreaders with an R0 of 20.
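As a quick check on the arithmetic, the 2.72 figure is just the population-weighted average of the two groups' R0 values:

```python
# Population-average R0 as a mixture of the two groups in the letter.
low_share, low_r0 = 0.90, 0.8     # 90% of the population, low spreaders
high_share, high_r0 = 0.10, 20.0  # 10% super spreaders

avg_r0 = low_share * low_r0 + high_share * high_r0
print(f"{avg_r0:.2f}")  # 2.72
```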

If what makes super spreaders different from the rest is just some invisible genetic factor, then using the average R0 of 2.72 in simulations may be a good approximation, and relaxing social distancing after the first wave may indeed lead to a large second wave.

But if what makes super spreaders different is a behavioral characteristic that also makes them much more likely to be infected than the rest of the population during the first wave, then the effect of the first wave may be much more permanent than the average R0 of 2.72 can capture.

Suppose the first wave infects 5% of low spreaders and 50% of super spreaders. Then after the first wave, the uninfected population contains a much smaller proportion of super spreaders than before, and R0 for that population drops dramatically (to 1.86 in this example).
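The 1.86 figure follows from re-weighting the average over the still-susceptible population, using the infection rates given above:

```python
# R0 among the still-susceptible population after the first wave,
# assuming 5% of low spreaders and 50% of super spreaders were
# infected (and so removed from the susceptible pool).
low_share, low_r0, low_infected = 0.90, 0.8, 0.05
high_share, high_r0, high_infected = 0.10, 20.0, 0.50

low_left = low_share * (1 - low_infected)     # 0.855 of the population
high_left = high_share * (1 - high_infected)  # 0.05 of the population
total_left = low_left + high_left             # 0.905 still susceptible

new_r0 = (low_left * low_r0 + high_left * high_r0) / total_left
print(f"{new_r0:.2f}")  # 1.86
```

Because the super spreaders' share of the susceptible pool falls from 10% to about 5.5%, their outsized R0 of 20 gets much less weight in the post-wave average.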

More generally, if there is variance in systematic individual characteristics that affect R0 (and not just chance factors particular to the first wave), then stopping the epidemic requires only that enough of the high-R0 individuals acquire immunity. That may happen naturally in the first wave, or it might be something that policy could influence. We may soon be able to test this by looking for a second wave in China as restrictions are relaxed.

An even more general point is that, unlike in many other familiar contexts, inequality in R0 is really good news. It reduces the size of the set of individuals whose behavior you need to influence. The more inequality the better!

