Mayank Gupta emails me:
The absolute number of false positives would rise dramatically under slightly inaccurate, broad surveillance testing. At least initially, the number of people going to the doctor to ask what to do would also rise. One can imagine if doctors truly flubbed and didn’t know how to advise patients accurately, a lot of individual patients would lose trust in the medical system (testing, doctors, or both). The consequence of this would be more resistance to health public policy measures in the future.
I see this as quite similar to what happened at Three Mile Island. There was a clear utilitarian benefit to taking on some small amount of nuclear accident risk. The public was never taught how or why to internalize and cope with this risk. When the risk manifested, individuals saw the risk but not the public benefit, and they turned against nuclear power. This change in public opinion was reflected in public policy afterward.
I’m sure there are some mismatches in this analogy, but I’m using it to point out that, in general, I find that thinking on the margin without thinking about the public’s ability to think on the margin can result in setbacks on the margin.
In an age where science is both more capable of solving social problems and more complex to understand, and where the public has too much complex information to sort through, the central problem of governance seems to be how to solve the public choice problem without creating a monopoly on information.
To be clear, I do favor rapid testing, but this problem is worth further thought.