Was RAND wrong?

by Alex Tabarrok on October 18, 2007 at 2:58 pm in Economics, Medicine | Permalink

No, not Ayn Rand, the RAND experiment on health care.  The RAND experiment randomly assigned people to different health plans, and one of the big findings was that cost sharing reduced use of health care but had little effect on health outcomes.  My colleague, Robin Hanson, likes to use this as a club to argue that we should cut medical spending in half.

Even randomized experiments have problems, however, and it turns out that there was a lot of attrition in the RAND experiment.  A Healthy Blog quotes from a new paper in the October 2007 issue of the Journal of Health Politics, Policy and Law, by Dr. John Nyman of the University of Minnesota (alas, not online).

Of the various responses to cost sharing that were observed in the participants of the RAND HIE, by far the strongest and most dramatic was in the relative number of RAND participants who voluntarily dropped out of the study over the course of the experiment. Of the 1,294 adult participants who were randomly assigned to the free plan, 5 participants (0.4 percent) left the experiment voluntarily during the observation period, while of the 2,664 who were assigned to any of the cost-sharing plans, 179 participants (6.7 percent) voluntarily left the experiment. This represented a greater than sixteenfold increase in the percentage of dropouts, a difference that was highly significant and a magnitude of response that was nowhere else duplicated in the experiment.

What explains this? The explanation that makes the most sense is that the dropouts were participants who had just been diagnosed with an illness that would require a costly hospital procedure. … If they dropped out, their coverage would automatically revert to their original insurance policies, which were likely to cover major medical expenses (such as hospitalizations) with no copayments … As a result of dropping out, these participants’ inpatient stays (and associated health care spending) did not register in the experiment, and it appeared as if participants in the cost-sharing group had a lower rate of inpatient use. … the cost-sharing participants who remained exhibited a lower rate of inpatient use than free FFS participants, not because they were responding to the higher coinsurance rate by forgoing frivolous hospital care but instead because they did not need as much hospital care, since many of those who became ill and needed hospital care had already dropped out of the experiment before their hospitalization occurred. …
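Nyman’s mechanism is easy to see in a toy simulation. The sketch below is mine, not Nyman’s, and every number in it is made up: both arms get an identical true hospitalization risk, and some cost-sharing participants who are about to be hospitalized drop out before the stay is recorded. The observed inpatient-use rates diverge anyway.

```python
import random

random.seed(0)

# Toy simulation of the dropout mechanism. All numbers are
# hypothetical; none of this is RAND data.
N = 10_000            # participants per arm
P_HOSP = 0.10         # true hospitalization risk, identical in both arms
P_DROP_IF_SICK = 0.5  # chance a cost-sharing participant facing a costly
                      # stay drops out before it is recorded (assumption)

def observed_inpatient_rate(cost_sharing):
    """Hospitalizations per participant who remains under observation."""
    remaining = hospitalized = 0
    for _ in range(N):
        will_be_hospitalized = random.random() < P_HOSP
        # Dropouts revert to their old insurance; their stays never register.
        if cost_sharing and will_be_hospitalized and random.random() < P_DROP_IF_SICK:
            continue
        remaining += 1
        hospitalized += will_be_hospitalized
    return hospitalized / remaining

print(f"free plan:    {observed_inpatient_rate(False):.3f}")
print(f"cost sharing: {observed_inpatient_rate(True):.3f}")
# Same true risk in both arms, yet the cost-sharing arm looks healthier
# because the participants most likely to be hospitalized selected out.
```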

Hat tip to The HealthCare Economist.

R. Richard Schweitzer October 18, 2007 at 3:13 pm

shame on y’all for bringing this up

sa October 18, 2007 at 4:13 pm

Quite possible. I am surprised RAND didn’t check this for such an important study. I wonder how many other studies tracking self-reported survey data have such problems.

Rachel Soloveichik October 18, 2007 at 4:46 pm

I remember recently reading something old written about the experiment. They did mention the different drop-out rates and tried to account for why people left. They said it didn’t make much of a difference (but I forget why).

Joseph Delaney October 18, 2007 at 5:37 pm

“I remember reading recently something old written about the experiment. They did mention the different drop-out rates and tried to account for why people left. They said it didn’t make much of a difference (but I forget why)”

To really evaluate this we’d have to know something about the outcomes of the people who left. You can make “guesses” about this, but the data now support some rather interesting possibilities. For example, the difference in hospitalizations was only one third the size of the difference in attrition, meaning the data could plausibly support a case where those with co-pays had higher net hospitalization rates.

Given that the interpretation is equivalent health outcomes at lower costs, this could matter a lot.
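To make that concrete with a rough bound: take the dropout rates quoted above (0.4 percent free versus 6.7 percent cost sharing) and set the observed hospitalization gap to one third of the attrition gap, per the comment. The hospitalization rate among dropouts below is pure assumption; the point is only how little it takes to flip the sign.

```python
# Dropout rates quoted from Nyman; everything else is assumed.
drop_free, drop_cs = 0.004, 0.067
attrition_gap = drop_cs - drop_free       # 6.3 percentage points
observed_hosp_gap = attrition_gap / 3     # ~2.1 points (per the comment)

# If even half the extra cost-sharing dropouts went on to be
# hospitalized after leaving, the unrecorded stays swamp the gap:
p_hosp_among_dropouts = 0.5               # assumption
unrecorded = attrition_gap * p_hosp_among_dropouts
corrected_gap = observed_hosp_gap - unrecorded

print(f"observed gap:  {observed_hosp_gap:+.4f}")
print(f"unrecorded:    {unrecorded:+.4f}")
print(f"corrected gap: {corrected_gap:+.4f}")  # negative: the sign flips
```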

KapKool October 18, 2007 at 6:41 pm

In the original article in the New England Journal of Medicine, they discussed different retention rates and why they did not think it was too much of a problem. You can read their initial reasoning on page 14 of the PDF of the original article at the following link:

http://www.rand.org/pubs/reports/2006/R3055.pdf

Hanson also linked directly to this PDF in his Overcoming Bias post about the study:

http://www.overcomingbias.com/2007/05/rand_health_ins.html

Arnold Kling October 18, 2007 at 7:14 pm

The author of the anti-RAND article, John Nyman, has a huge personal investment in his view that health insurance is good and everyone who suspects that it causes overconsumption of health care is wrong. Trusting him on this issue would be like trusting a die-hard supply-sider’s analysis of the Bush tax cuts.

Look at various analyses before drawing conclusions.

Joseph Delaney October 18, 2007 at 7:49 pm

Yeah, I have been trying to trace the limited follow-up information. They clearly made a noble effort, and in a modern trial they’d have a flow chart. But I am still not convinced that this removes the threat to validity from non-random censoring; it just lowers the size of the possible bias.

It is a pity that nobody has ever considered replicating it.

Mikael S. October 19, 2007 at 4:18 am

Arnold Kling>> OK, so he got the results he “wanted”. Something I think is a problem among many economists, and social scientists in general. Instead of this strange attack, can’t you explain what is incorrect in the study, rather than arguing that he has an agenda (and you don’t, right?)?

RobbL October 19, 2007 at 9:21 am

Sa,

If you assume the extreme counterfactual that they were all dead, then the catastrophic plan had more than 300% higher mortality (11/1294 for the free plan versus 31/759 for the catastrophic plan).
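A quick check of the arithmetic, taking those counts as given:

```python
free = 11 / 1294   # all-dead counterfactual, free plan
cata = 31 / 759    # all-dead counterfactual, catastrophic plan
print(f"{free:.4f} vs {cata:.4f} -> {cata / free - 1:.0%} higher")
# 0.0085 vs 0.0408 -> 380% higher, i.e. "more than 300%"
```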

Yancey Ward October 19, 2007 at 11:46 am

John Pertz,

You are right to chastise me for questioning the integrity and competence of their methods without actually reading the relevant section of their study first to see if the criticisms leveled at it were, in fact, legitimate. I just assumed that Alex Tabarrok had checked the original paper to see if the criticisms by Nyman were completely on target. It now appears that the RAND researchers did all they could reasonably be expected to do to account for the dropouts and their potential impact on the stated results.

Joseph Delaney October 20, 2007 at 2:29 am

“Intention to Treat” is able to handle poor treatment adherence and switching. It can’t handle loss to follow-up without an assessment of the outcomes of the lost participants.
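A toy example of the distinction (the records below are invented): intention to treat keeps a plan-switcher in the arm he was randomized to, but a participant lost to follow-up contributes no outcome at all, so no analysis choice recovers him without extra assumptions.

```python
# Hypothetical records: (randomized arm, adhered to plan?, outcome or None).
participants = [
    ("cost_sharing", True,  0),
    ("cost_sharing", False, 1),     # switched plans: ITT still counts him here
    ("cost_sharing", True,  None),  # lost to follow-up: no outcome to count
    ("free",         True,  1),
    ("free",         True,  0),
]

def itt_mean(arm):
    """Intention to treat: analyze by randomized arm, adherent or not,
    but only observed outcomes can enter the average."""
    ys = [y for a, _, y in participants if a == arm and y is not None]
    return sum(ys) / len(ys)

print(itt_mean("cost_sharing"), itt_mean("free"))
# Non-adherence is handled, but the lost participant simply vanishes;
# if loss correlates with the outcome, the comparison is biased anyway.
```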
