Here are some relevant criticisms from Soltani, Calo, and Bergstrom:
Studies suggest that people have on average about a dozen close contacts a day—incidents involving direct touch or a one-on-one conversation—yet even in the absence of social distancing measures the average infected person transmits to only 2 or 3 other people throughout the entire course of the disease. Fleeting interactions, such as crossing paths in the grocery store, will be substantially more common and substantially less likely to cause transmission. If the apps flag these lower-risk encounters as well, they will cast a wide net when reporting exposure. If they do not, they will miss a substantive fraction of transmission events. Because most exposures flagged by the apps will not lead to infection, many users will be instructed to self-quarantine even when they have not been infected. A person may put up with this once or twice, but after a few false alarms and the ensuing inconvenience of protracted self-isolation, we expect many will start to disregard the warnings.
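The base-rate arithmetic behind this false-positive worry can be sketched from the figures in the quote (a dozen close contacts per day, 2 or 3 transmissions per infection). The length of the infectious window is not given in the source; the ~10 days below is an assumption for illustration:

```python
# Rough base-rate sketch using the figures quoted above.
close_contacts_per_day = 12
infectious_days = 10          # assumed for illustration, not from the source
transmissions_per_case = 2.5  # midpoint of "2 or 3"

total_close_contacts = close_contacts_per_day * infectious_days  # 120
p_transmission_per_contact = transmissions_per_case / total_close_contacts

print(f"{p_transmission_per_contact:.1%} of flagged close contacts infected")
```

Under these assumptions only about 2 percent of flagged close contacts would actually be infected, so the overwhelming majority of exposure notifications go to people who were never infected, which is exactly the alert-fatigue scenario the authors describe.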
At least as problematic is the issue of false negatives—instances where these apps will fail to flag individuals as potentially at risk even when they’ve encountered someone with the virus. Smartphone penetration in the United States remains at about 81 percent—meaning that even if we had 100 percent installation of these apps (which is extremely unlikely without mandatory policies in place), we would still see only a fraction of the total exposure events (65 percent, by Metcalfe’s law). Furthermore, people don’t always have their phones on them.
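The 65 percent figure follows from squaring the penetration rate: an encounter is recorded only if both parties carry a capable phone, so with independent ownership the detection rate is p². A minimal sketch:

```python
# Both parties in an encounter must carry a smartphone (with the app)
# for the exposure event to be recorded, so the detection rate is p**2.
smartphone_penetration = 0.81

p_both_have_phones = smartphone_penetration ** 2
print(f"{p_both_have_phones:.1%} of exposure events detectable")
```

This yields 65.6 percent, matching the roughly 65 percent cited in the quote, and that is the best case with universal app installation among smartphone owners.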
There is also a very real danger that these voluntary surveillance technologies will effectively become compulsory for any public and social engagement. Employers, retailers, or even policymakers can require that consumers display the results of their app before they are permitted to enter a grocery store, return to work, or use public services, as is slowly becoming the norm in China and Hong Kong, and as is even being explored for visitors to Hawaii.
Taken with the false positive and “griefing” (intentionally crying wolf) issues outlined above, there is a real risk that these mobile-based apps can turn uninfected individuals into social pariahs, restricted from accessing public and private spaces or participating in social and economic activities. The likelihood that this will have a disparate impact on those already hardest hit by the pandemic is also high. Individuals living in densely populated neighborhoods and apartment buildings—characteristics correlated with non-white and lower-income communities—are likelier to experience false positives due to their close proximity to one another.
In another study:
Nearly 3 in 5 Americans say they are either unable or unwilling to use the infection-alert system under development by Google and Apple, suggesting that it will be difficult to persuade enough people to use the app to make it effective against the coronavirus pandemic, a Washington Post–University of Maryland poll finds.
And here are skeptical remarks from Bruce Schneier.
I also have worried about how testing and liability law would interact. If positive cases do test positive, it may be harder for businesses and schools to reopen, because they will be seen as not having “done enough” to keep the positive cases out, or perhaps the businesses and the schools are the ones doing the testing in the first place. Whereas under a lower-testing “creative ambiguity” equilibrium, perhaps it is easier to think in terms of statistical rather than known lives lost, and to proceed with some generally beneficial activities, even though of course some positive cases will be walking through the doors.
I wonder if there also is a negative economic effect, over the longer haul, simply by making fear of the virus more focal in people’s minds. The plus of course is simply that contact tracing does in fact slow down the spread of the virus and allows resources to be allocated to individuals and areas of greatest need.