COVID Prevalence and the Difficult Statistics of Rare Events

In a post titled "Defensive Gun Use and the Difficult Statistics of Rare Events," I pointed out that it's very easy to go wrong when estimating rare events.

Since defensive gun use is relatively uncommon under any reasonable scenario, there are many more opportunities to miscode in a way that inflates defensive gun use than there are to miscode in a way that deflates it.

Imagine, for example, that the true rate of defensive gun use is not 1% but .1%. At the same time, imagine that 1% of all people are liars. Thus, in a survey of 10,000 people, there will be 100 liars. On average, 99.9 (~100) of the liars will say that they used a gun defensively when they did not and .1 of the liars will say that they did not use a gun defensively when they did. Of the 9900 people who report truthfully, approximately 10 will report a defensive gun use and 9890 will report no defensive gun use. Adding it up, the survey will find a defensive gun use rate of approximately (100+10)/10000=1.1%, i.e. more than ten times higher than the actual rate of .1%!
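The arithmetic above can be sketched in a few lines. This is a hypothetical helper, not from the original post; the function name and parameters are my own, and it assumes (as the example does) that every liar simply inverts their true answer.

```python
def reported_rate(true_rate, liar_rate, n=10_000):
    """Expected survey-reported rate of a rare event when a small
    fraction of respondents lie by inverting their true answer."""
    liars = n * liar_rate                # 100 liars out of 10,000
    truthful = n - liars                 # 9,900 truthful respondents
    false_yes = liars * (1 - true_rate)  # ~99.9 liars falsely report a use
    true_yes = truthful * true_rate      # ~9.9 truthful reports of a use
    return (false_yes + true_yes) / n

# True rate 0.1%, 1% liars -> survey reports ~1.1%, over ten times too high.
print(reported_rate(0.001, 0.01))
```

Running this gives roughly 0.011, matching the (100+10)/10000 calculation in the text.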

Epidemiologist Trevor Bedford points out that a similar problem applies to tests of COVID-19 when prevalence is low. The recent Santa Clara study found a 1.5% rate of antibodies to COVID-19. The authors assume a false positive rate of just .005 and a true positive rate (sensitivity) of ~.8. Thus, if you test 1000 individuals, ~5 will show up as having antibodies when they actually don't, and x*.8 will show up as having antibodies when they actually do. Since (5+x*.8)/1000=.015, then x=12.5, so the true rate is 12.5/1000=1.25%; thus the reported rate is pretty close to the true rate. (The authors then inflate their numbers for population weighting, which I am ignoring.) On the other hand, suppose that the false positive rate is .015, which is still very low and not implausible. Then we can easily have ~15/1000=1.5% showing up as having antibodies to COVID when none of them in fact do, i.e. all of the result could be due to test error.
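The inversion above can also be written out directly. This is a sketch under the post's assumptions (false positive rate .005 or .015, sensitivity .8); the function name is hypothetical.

```python
def implied_true_positives(observed_positives, n, fpr, sensitivity):
    """Solve (n * fpr + x * sensitivity) = observed_positives for x,
    the implied number of truly infected people among the n tested."""
    return (observed_positives - n * fpr) / sensitivity

# Santa Clara-style numbers: 15 positives per 1,000 tested.
x = implied_true_positives(15, 1000, fpr=0.005, sensitivity=0.8)
print(x, x / 1000)   # 12.5 truly positive -> true rate 1.25%

# With a false positive rate of .015 instead, the implied count is zero:
print(implied_true_positives(15, 1000, fpr=0.015, sensitivity=0.8))  # 0.0
```

The second call shows the point of the post: a shift in the false positive rate from .005 to .015 moves the implied true prevalence from 1.25% all the way to zero.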

In other words, when the event is rare the potential error in the test can easily dominate the results of the test.

Addendum: For those playing at home, Bedford uses sensitivity and specificity while I am more used to thinking about false positive and false negative rates, and I simplify the numbers slightly (.8 instead of his .803, and so forth), but the point is the same.
