Interpreting the results of the Oregon Medicaid experiment

There is a new and probably very important paper by Amy Finkelstein, Nathaniel Hendren, and Erzo F.P. Luttmer:

We develop and implement a set of frameworks for valuing Medicaid and apply them to welfare analysis of the Oregon Health Insurance Experiment, a Medicaid expansion that occurred via random assignment. Our baseline estimates of the welfare benefit to recipients from Medicaid per dollar of government spending range from about $0.2 to $0.4, depending on the framework, with a relatively robust lower bound of about $0.15. At least two-fifths – and as much as four-fifths – of the value of Medicaid comes from a transfer component, as opposed to its ability to move resources across states of the world. In addition, we estimate that Medicaid generates a substantial transfer to non-recipients of about $0.6 per dollar of government spending.

One implication is that the poor might be better off receiving direct cash transfers instead: “Our welfare estimates suggest that if (counterfactually) Medicaid recipients had to pay the government’s cost of their Medicaid, they would not be willing to do so.”

And perhaps this sentence could use the “rooftops treatment”:

It is a striking finding that Medicaid transfers to non-recipients are large relative to the benefits to recipients; depending on which welfare approach is used, transfers to non-recipients are between one-and-a-half and three times the size of benefits to recipients.
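That one-and-a-half-to-three range follows directly from the abstract's headline numbers; a quick arithmetic check (assuming the ~$0.6 transfer to non-recipients and the $0.2–$0.4 recipient-benefit range quoted above):

```python
# Headline figures from the paper's abstract, per dollar of government spending.
benefit_to_recipients_low = 0.2
benefit_to_recipients_high = 0.4
transfer_to_nonrecipients = 0.6

# Transfers to non-recipients relative to benefits to recipients.
ratio_low = transfer_to_nonrecipients / benefit_to_recipients_high
ratio_high = transfer_to_nonrecipients / benefit_to_recipients_low

print(round(ratio_low, 1), round(ratio_high, 1))  # 1.5 3.0
```

So the "one-and-a-half to three times" claim is just the $0.6 non-recipient transfer divided by the high and low ends of the recipient-benefit range.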

Or this:

Across a variety of alternative specifications, we consistently find that Medicaid’s value to recipients is lower than the government’s costs of the program, and usually substantially below. This stands in contrast to the current approach used by the Congressional Budget Office to value Medicaid at its cost. It is, however, not inconsistent with the few other attempts we know of to formally estimate a value for Medicaid; these are based on using choices to reveal ex-ante willingness to pay, and tend to find estimates (albeit for different populations) in the range of 0.3 to 0.5.

Might the program in fact be a bad idea?
