Chris Blattman on randomized control trials

As usual, he is wise:

Yes, the randomized evaluation remains the "gold standard" for
important (albeit narrow) questions. Social science, however, has a
much bigger toolbox for a much broader (and often more interesting)
realm of inquiry. If you want to know the effects of small binary
treatments, you are in business. If you find any other question in the
world interesting, you have some more work to do. Dani Rodrik has made
a similar point here.

Don’t
get me wrong: a large number of my projects are randomized control
trials. They are eminently worth pursuing. But to be honest, uncovering
the causes of effects excites me more than measuring the effects of
causes. An evaluation masters the second, but only hints at the first.
The hardest and most rewarding work is the theoretical and
investigative work that comes with uncovering the underlying rhythms
and rules of human behavior.

…If your goal is to improve the delivery of aid, and truly advance
development, many more skills and knowledge are involved than the
randomized evaluation. See here for
more. But in short: a well-identified causal impact that arrives two
years after the program does not performance management make.

Chris also points us toward a new and excellent blog, Obama in Kenya.

Comments

"i am defeated." -- john mccain

I cannot agree and I'm flabbergasted that Chris Blattman would write such a thing.

The problem with current "theory" in economics is that it is divorced from reality, and randomized trials can be one tool used to change that.

Good science has a good feedback loop. Theory informs practice and practice refines theory -- and perhaps leads to new theory.

This is where randomized trials can play an important role. If theory says a certain intervention or input causes a certain result, but a trial shows that this is not the case, then the theory needs to be looked at again.
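
A rough sketch of that feedback loop (the numbers and setup are hypothetical, not from the comment): simulate a trial in which theory predicts the input raises the outcome but the true effect is zero; a two-sample test on the trial data then pushes back on the theory.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 500                                            # participants per arm (hypothetical)
    control = rng.normal(loc=10.0, scale=3.0, size=n)  # outcomes without the input
    treated = rng.normal(loc=10.0, scale=3.0, size=n)  # true effect is zero by construction

    t_stat, p_value = stats.ttest_ind(treated, control)
    print(f"difference in means: {treated.mean() - control.mean():.2f}")
    print(f"p-value: {p_value:.3f}")                   # large p-value -> revisit the theory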

Additionally, what does Blattman mean by "investigative work"? That's vague. What other methods can be used to untangle the web of causality to the degree that randomized trials can?

Don't get me wrong -- randomized trials are full of their own problems. But to assume that they are only useful for measuring the effects of causes is somewhat misleading.

randomization is supposed to be a way to deal with the issue of self selection; it is a way of rationalizing the identification assumption necessary to obtain a point estimate when comparing group means.
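
A minimal simulation of that point, using potential outcomes and toy numbers of my own choosing: when people self-select into treatment, a raw comparison of group means is badly biased, while a coin-flip assignment recovers the average treatment effect of 2.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    y0 = rng.normal(10.0, 3.0, n)        # potential outcome without treatment
    y1 = y0 + 2.0                        # potential outcome with treatment (ATE = 2)

    # self-selection: those who expect to do badly untreated opt into treatment
    chose_treatment = y0 < np.median(y0)
    naive = y1[chose_treatment].mean() - y0[~chose_treatment].mean()

    # randomization: a coin flip decides who is treated
    assigned = rng.random(n) < 0.5
    randomized = y1[assigned].mean() - y0[~assigned].mean()

    print(f"naive (self-selected) difference in means: {naive:.2f}")       # comes out negative
    print(f"randomized difference in means:            {randomized:.2f}")  # about 2.0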

unfortunately, it purchases in-sample comparability at the price of inducing a different selection effect, i.e., who participates in the study. consider: in the presence of a treatment that helps some people and not others, persons who know the existing treatment works may be less likely to participate in a study that might randomize them to something else that may not work. thus, randomization can yield (and i would argue generally does yield) an accurate estimate of a parameter that is *not* the one the researcher is interested in.
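
A sketch of that selection-into-the-study problem, under assumptions invented purely for illustration: the treatment helps only a "responder" subgroup, and responders (who are well served by what they already have) mostly stay out of the trial, so the randomized estimate is internally valid but answers a question about enrollees rather than about the population.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 200_000
    responder = rng.random(n) < 0.5          # treatment helps only these people
    effect = np.where(responder, 4.0, 0.0)   # individual treatment effect
    print(f"population ATE: {effect.mean():.2f}")                          # about 2.0

    # responders, already helped by the existing treatment, are less likely to enroll
    enroll_prob = np.where(responder, 0.2, 0.8)
    enrolled = rng.random(n) < enroll_prob
    print(f"ATE among trial participants: {effect[enrolled].mean():.2f}")  # about 0.8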

in principle, econometric models of endogeneity can be used to deal with the selection effect, i.e., the correlation of unobserved factors affecting the choice of treatment and the outcome of interest. unfortunately, there aren't a huge number of models of endogeneity that allow for multiple treatment effects, which is the interesting case (but then, there aren't good approaches for this in rct either - they generally focus on some flavor of random coefficient models, usually in conjunction with a more or less unjustified normality assumption). so blattman is right, to the extent that he is arguing that we need better models.
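
For concreteness, a minimal control-function sketch of the endogeneity idea, in the simplest linear, single-treatment, homogeneous-effect case; the data-generating process and variable names are hypothetical, and this is exactly the easy case rather than the multiple-treatment-effects case the commenter calls interesting.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 200_000
    u = rng.normal(size=n)                   # unobserved factor affecting both choice and outcome
    z = rng.normal(size=n)                   # instrument: shifts take-up, not the outcome directly
    d = 0.8 * z + u + rng.normal(size=n)     # endogenous treatment intensity
    y = 2.0 * d + 3.0 * u + rng.normal(size=n)

    ones = np.ones(n)

    # naive OLS of y on d is biased upward by the correlation between d and u
    X = np.column_stack([ones, d])
    print("OLS coefficient on d:", np.linalg.lstsq(X, y, rcond=None)[0][1])

    # control function: first stage d ~ z, then include the residual in the outcome equation
    Z = np.column_stack([ones, z])
    v = d - Z @ np.linalg.lstsq(Z, d, rcond=None)[0]
    Xc = np.column_stack([ones, d, v])
    print("control-function coefficient on d:", np.linalg.lstsq(Xc, y, rcond=None)[0][1])  # about 2.0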

the hard question is what's the better second-best alternative: rct that induce a selection bias vs models of endogeneity that aren't particularly good models of the data generating process. it's a tough question that is largely blown off, in particular by the biostats field and regulatory agencies, e.g., the fda.
