The economics of placebos for self-remitting diseases

Daniel Carpenter, who just wrote the very impressive FDA book, has an interesting paper on his home page:

I develop a simple stochastic model of inference and therapeutic utilization in the presence of placebo effects, when the underlying medical condition may be self-remitting. In the model, expectations generate a “felt” health state which can mimic the medically cured health state even when the treatment in question has no real curing power. This effect may be augmented by self-limitation of the medical condition for which the treatment is utilized. A human agent then applies Bayes’ rule to the felt history as if it were generated pharmacologically. A more sophisticated agent knows of placebo effects but does not know the precise extent to which they contribute to curing. I describe the bias that attends inference and the under- or overutilization of therapies under such a model. A central result of the model is that human placebo learning is generally subject to greater bias in estimating treatment efficacy when diseases are self-limiting. Human agents may commit several types of decision errors under placebo learning. They may continually choose a more costly (expensive, hazardous) treatment when a less costly one would work as well, or they may continually use inferior treatments for life-threatening illnesses. When diseases are self-limiting, both these types of error are more likely when the human agent has high initial beliefs about the treatment. Possible applications of the model include the patent medicine industry, the robustness of markets for herbal and nutritional supplements, and the contemporary stability of counterfeit drug operations.
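The core mechanism in the abstract can be illustrated with a tiny simulation. This is only a sketch of the idea, not the paper's model: all of the parameter values below are hypothetical, and the agent is the "naive" one who applies Bayes' rule to the felt history as if it were generated pharmacologically.

```python
import random

random.seed(0)

# Hypothetical parameters -- illustrative only, not from the paper.
TRUE_EFFICACY = 0.0      # the treatment has no real curing power
SELF_REMISSION = 0.6     # chance the condition resolves on its own
PLACEBO_FELT_CURE = 0.3  # chance expectations produce a "felt" cure anyway

def episode():
    """One illness episode: does the agent *feel* cured after treating?"""
    if random.random() < TRUE_EFFICACY:
        return True
    if random.random() < SELF_REMISSION:
        return True
    if random.random() < PLACEBO_FELT_CURE:
        return True
    return False

# Naive Bayesian agent: Beta(a, b) belief about the treatment's cure rate,
# updated on the felt history as if every felt cure were pharmacological.
a, b = 1, 1
for _ in range(1000):
    if episode():
        a += 1
    else:
        b += 1

estimated_efficacy = a / (a + b)
print(f"true efficacy: {TRUE_EFFICACY}, estimated: {estimated_efficacy:.2f}")
```

With a high self-remission rate, the agent's posterior converges near 0.7 even though the treatment cures nothing, which is the paper's central point: self-limiting diseases make placebo learning more biased.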

Of course, this applies far more broadly than to medicine. It helps explain why people overuse and underuse "treatments" of many different kinds, including education. Here is Dan Carpenter's page on fly fishing.


Does this explain anything about religion?

I am a fan of Dan Carpenter’s work. It is the best work about the performance/behavior of government programs being done now and some of the best ever done. What I like about this book is its use of real options analysis. Carpenter assumes that safety is a continuous-time Wiener process, with an absorbing barrier at zero. Carpenter further assumes that time and repeated drug trials will allow the FDA to learn. Finally, the value Carpenter assigns to the FDA’s aversion to adverse drug reactions drives the model’s optimal stopping point. This is a typical problem of administrative choice. Many, if not most, administrative choices can and probably should be modeled as learning problems.
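The modelling idea described above can be sketched in a few lines: safety evidence evolves as a random walk (a discrete-time approximation of a Wiener process), the drug is rejected if the process hits the absorbing barrier at zero, and approved if accumulated evidence crosses a stopping threshold. Everything here is a hypothetical illustration of the stopping-problem structure, not Carpenter's actual model or calibration.

```python
import random

random.seed(1)

# Hypothetical parameters -- a sketch of the stopping problem, not the book's model.
DRIFT = 0.02       # average per-trial evidence that the drug is safe
SIGMA = 0.5        # noise in each trial's result
START = 1.0        # initial safety assessment
APPROVE_AT = 3.0   # evidence level at which the agency approves
MAX_TRIALS = 10_000

def review():
    """Safety evidence as a random walk with an absorbing barrier at zero:
    hit zero and the drug is rejected; reach APPROVE_AT and it is approved."""
    x = START
    for t in range(1, MAX_TRIALS + 1):
        x += DRIFT + SIGMA * random.gauss(0, 1)
        if x <= 0.0:
            return ("rejected", t)
        if x >= APPROVE_AT:
            return ("approved", t)
    return ("pending", MAX_TRIALS)

outcomes = [review() for _ in range(2000)]
approved = sum(1 for outcome, _ in outcomes if outcome == "approved")
print(f"approved {approved} of 2000 simulated drugs")
```

Raising the approval threshold (more aversion to adverse reactions) lowers the approval rate and lengthens review times, which is the sense in which the agency's loss function drives the optimal stopping point.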

But I am uncomfortable with his analytical strategy, which presumes an idiosyncratic objective function for the FDA: maximizing agency reputation. It would be better, I believe, to start with the presumption that enterprises maximize performance. At a minimum, that approach can provide us with a counterfactual against which the consequences of self-interest, conformity to political pressures, unthinking habit, or the stream of influences from their environment can be assessed. That is how we think about businesses. We usually start by assuming that they maximize the present values of the free cash flows thrown off by inventory turnover. We justify this starting place by referencing Fisher’s separation theorem and the fact that shareholders ought to be indifferent to non-systemic risk. Once we have established a satisfactory counterfactual, we might then wish to investigate the consequences of asymmetric information, path dependence, shifting internal coalitions, or bounded rationality.

In this case, we should take the FDA’s charge seriously: ensuring the safety and efficacy of pharmaceuticals. Both can be measured in terms of quality-adjusted life years (QALY), a measure that combines health status and years of health. Using QALYs to measure safety and efficacy would have the analytical advantage of applying the same metric for benefits and costs, although it might make sense to weight them to reflect the FDA’s emphasis on safety. Only then should one turn to the secondary question of how much reputation or political influence adds to our understanding of the timing of FDA approvals or, perhaps, what the timing of FDA approvals tells us about its reputational concerns or the effects of political influence. I am not denying that these concerns have intellectual bite. But putting them first seems like a case of the tail wagging the dog.
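The QALY arithmetic referred to above is just a weighted sum: each period of life is multiplied by a quality weight between 0 (dead) and 1 (full health), and the products are summed. A minimal sketch, with illustrative weights that are not drawn from any real valuation study:

```python
# A QALY total is the sum over health states of (quality weight x years lived).
# The weights below are illustrative only.
health_profile = [
    (1.0, 5),  # 5 years in full health
    (0.7, 3),  # 3 years with a chronic condition
    (0.4, 2),  # 2 years with severe illness
]
qalys = sum(weight * years for weight, years in health_profile)
print(f"{qalys:.1f} QALYs")  # 5*1.0 + 3*0.7 + 2*0.4
```

On this metric, a safety harm (lost quality or years) and an efficacy gain are expressed in the same units, which is the analytical advantage the comment points to.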

Moreover, I think one might argue that, in general, government programs share a common purpose: mitigating systemic risks. This implies an objective function for most that looks something like the following: maximize reliability, subject to some minimum service accomplishment or delivery constraint. This follows from the fact that the things government does for us, the services it provides, have one fundamental attribute: they are all things we depend upon to get by, such as the legal system, incapacitating criminals, fire protection, education, a transportation network, a social safety net, and clean water. Moreover, deferring their delivery is prohibitively costly; these services must be provided in real time. Given the ways in which their delivery has been organized, often but not always for good and sufficient reasons, this means their providers cannot be permitted to fail.

It has always seemed strange to me that where government programs are concerned, my colleagues usually start with the presumption of failure rather than success. Unfortunately, if you cannot say what success looks like, you cannot say very much about failure either.

I loved the unintentional [probably] irony of the juxtaposition of the last sentence in the original post. Is Tyler hypothesizing that I could catch just as many fish with a bare hook, and the fly is just a placebo?

In my case, sadly, it's probably true -- zero equals zero under almost all circumstances -- but it's still disquieting. Those flies are expensive.

Can we ask people at the end of a double-blind study whether they think they were in the placebo arm or the treatment arm? If they can predict this with high accuracy it would have implications for the results of the study.
