A Fine Theorem on RCTs and the new Nobel Laureates

In this vein, randomized trials tend to have very small sample sizes compared to observational studies. When this is combined with the high "leverage" of outlier observations — especially when multiple treatment arms or heterogeneous effects are evaluated — randomized trials often predict poorly out of sample even when unbiased (see Alwyn Young's QJE paper on this point). Observational studies allow larger sample sizes, and hence often predict better even when they are biased. The theoretical assumptions of a structural model permit parameters to be estimated even more tightly, as we use a priori theory to effectively restrict the nature of economic effects.
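The bias–variance tradeoff in that paragraph can be illustrated with a toy Monte Carlo simulation (my own sketch, not from the post; the effect size, bias, noise level, and sample sizes are made-up numbers): an unbiased estimate from a small trial can have a larger mean squared error than a biased estimate from a much larger observational sample.

```python
import random

random.seed(0)

TRUE_EFFECT = 1.0   # hypothetical true treatment effect
BIAS = 0.3          # hypothetical confounding bias in the observational estimate
NOISE_SD = 5.0      # outcome noise
N_RCT, N_OBS = 50, 2000
REPS = 1000         # Monte Carlo replications

def mse(n, bias):
    """Mean squared error of a sample-mean effect estimate."""
    total = 0.0
    for _ in range(REPS):
        draws = [random.gauss(TRUE_EFFECT + bias, NOISE_SD) for _ in range(n)]
        est = sum(draws) / n
        total += (est - TRUE_EFFECT) ** 2
    return total / REPS

m_rct = mse(N_RCT, 0.0)    # unbiased, small sample: MSE is all variance
m_obs = mse(N_OBS, BIAS)   # biased, large sample: MSE is mostly bias squared
print(f"unbiased small-N RCT    MSE: {m_rct:.3f}")
print(f"biased large-N estimate MSE: {m_obs:.3f}")
```

With these made-up parameters the biased large-sample estimate wins on MSE, because its squared bias (0.09) is smaller than the small trial's sampling variance (25/50 = 0.5); the tradeoff of course reverses if the bias is large enough.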

We have thus far assumed the randomized trial is unbiased, but that is often suspect as well. Even if I randomly assign treatment, I have not necessarily randomly assigned spillovers in a balanced way, nor have I restricted untreated agents from rebalancing their effort or resources. A PhD student of ours on the market this year, Carlos Inoue, examined the effect of random allocation of a new coronary intervention in Brazilian hospitals. Following the arrival of this technology, good doctors moved to hospitals with the “randomized” technology. The estimated effect is therefore nothing like what would have been found had all hospitals adopted the intervention. This issue can be stated simply: randomizing treatment does not in practice hold all relevant covariates constant, and if your response is just “control for the covariates you worry about”, then we are back to the old setting of observational studies where we need a priori arguments about what these covariates are if we are to talk about the effects of a policy.
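The covariate-imbalance point can also be made concrete. In the hypothetical sketch below (my own illustration, not from the post or from Inoue's paper), "doctor quality" stands in for an unobserved covariate: with only 20 units, a large fraction of randomizations leave the two arms meaningfully imbalanced on it.

```python
import random

random.seed(1)

N = 20            # a small trial: 20 hospitals
REPS = 5000       # number of simulated randomizations
THRESHOLD = 0.5   # a "large" gap: half a standard deviation of quality
imbalanced = 0

for _ in range(REPS):
    quality = [random.gauss(0, 1) for _ in range(N)]   # unobserved doctor quality
    arms = set(random.sample(range(N), N // 2))        # randomize half to treatment
    treated = [quality[i] for i in arms]
    control = [quality[i] for i in range(N) if i not in arms]
    gap = abs(sum(treated) / len(treated) - sum(control) / len(control))
    if gap > THRESHOLD:
        imbalanced += 1

share = imbalanced / REPS
print(f"share of randomizations with a large quality gap: {share:.2f}")
```

Randomization guarantees balance only in expectation; in any single small draw, roughly a quarter of assignments here leave the arms half a standard deviation apart on the unobserved covariate.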

There is much more of interest in the post, very high quality as you might expect given the source.

Comments

I watched a cool short video about RCTs a few days ago:

https://www.youtube.com/watch?v=lCaszQEvugQ

"Observational studies allow larger sample sizes, and hence often predict better even when they are biased."

The pharma industry has been largely bound to observational studies with SSRIs and other antidepressants (there is, as of now, still no biomarker that can be used). The statistics are all over the place, and it is difficult to figure out how to treat patients suffering from depression without a lot of trial and error. Perhaps neural imaging will get to the point where it can be used, but that is still in the future.

This is incorrect. SSRIs, as well as plenty of other antidepressants, have repeatedly been shown to be more effective than placebo in RCTs. Neural imaging is very much in the dark ages. Psychiatric care is very much trial and error, but there is little doubt that antidepressants are better than placebo for moderate and severe depression. They are not better than placebo for mild depression or 'phase of life' problems.

Excellent post. Consider the following: EVEN when medical researchers do enormous (3,000+ patients), double-blinded RCTs, the results not infrequently fail to replicate or are later contradicted. The design of a medical RCT is generally stronger than that of an 'economic RCT' — yet medical RCTs are still somewhat suspect. Should we really trust economic RCTs?

Indeed. I would rather die than support Mr. Waldir.

Wow -- we've got yet-another Thiago, folks!!

Keep those scorecards handy.

PS Tyler, no one gives a shit about your post.

A very good post indeed.

The point about a priori theory being able to guide or restrict the estimation procedures is a good one, and IMO is at the heart of what makes econometrics different from statistics as a field even though they're both based on the same fundamental principles of probability and statistics.

And this principle can also be used to critique the machine learning techniques that have been in vogue for a while: they are data-driven and atheoretical. I'm no Platonist, but to rely solely on the data is to rely on a limited and narrow view of the world, like the prisoners in Plato's cave.

A decent economist looks for the unseen.

A wise economist knows there is still the unknowable.

That's probably true, but where do we find the wise ones?

Not here, obviously.... but WHERE???

I wonder if there is an effect where many observational studies control for bias as tightly as they can because they are seeking credibility against the "but it isn't an RCT" objection, whereas RCTs more often rest on the principles of the randomization process to ensure quality, and researchers overlook the case-specific biases that sneak in.

Random trials are self-proving. They show that, under the given (and unknown) conditions, treatment A does one thing and treatment B does not.

But the application is right there. If you liked A's results, do more of A, right then and there and in the surrounding areas, and do a little less of B. You will almost certainly get more A-like results nearby if you really do more A.

The key is to find the exception, or the boundary, of your theory, and to remember to stop doing A when it is reached.

The solution to small-N experiments isn't fewer experiments but more of them, and more experiments with large N. Look at the literature on GOTV (get-out-the-vote) in political science. What started as a few field experiments has turned into hundreds, and knowledge of the topic has grown immensely. Moreover, previous observational work with large N was quite biased.

I published a paper by Steve Ziliak on problems with how randomization is being done in field experiments, in the first issue of the Review of Behavioral Economics (2014, 1, 167-208): "Balanced versus Randomized Field Experiments: Why W.S. Gosset aka 'Student' Matters."

Despite these problems, and the criticisms raised by Deaton, Heckman, Stiglitz, and others in an August 2018 letter to the Guardian, I continue to think this is a worthy award, especially assuming they give one to John List in the not-too-distant future, although Ziliak's paper criticizes some of his work as well.
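Gosset's balanced-design point, as I read Ziliak's title, can be sketched with a toy simulation (my own illustration with made-up parameters, not from the paper): matching units into pairs on a prognostic covariate and randomizing within pairs yields a much less variable effect estimate than complete randomization at the same sample size.

```python
import random

random.seed(2)

TRUE_EFFECT = 1.0   # hypothetical treatment effect
N_PAIRS = 10        # 20 units total
REPS = 3000         # number of simulated trials per design

def estimate(balanced):
    """Run one simulated trial and return the difference-in-means estimate."""
    x = [random.gauss(0, 2) for _ in range(2 * N_PAIRS)]  # prognostic covariate
    if balanced:
        x.sort()   # match similar units into adjacent pairs on the covariate
        treat = {random.choice((i, i + 1)) for i in range(0, 2 * N_PAIRS, 2)}
    else:
        treat = set(random.sample(range(2 * N_PAIRS), N_PAIRS))
    y = [x[i] + (TRUE_EFFECT if i in treat else 0.0) + random.gauss(0, 0.5)
         for i in range(2 * N_PAIRS)]
    t_mean = sum(y[i] for i in treat) / N_PAIRS
    c_mean = sum(y[i] for i in range(2 * N_PAIRS) if i not in treat) / N_PAIRS
    return t_mean - c_mean

def sd(vals):
    m = sum(vals) / len(vals)
    return (sum((v - m) ** 2 for v in vals) / len(vals)) ** 0.5

pure = [estimate(balanced=False) for _ in range(REPS)]
paired = [estimate(balanced=True) for _ in range(REPS)]
print("complete randomization, SD of estimate:", round(sd(pure), 2))
print("pair-balanced design,   SD of estimate:", round(sd(paired), 2))
```

Sorting on the covariate before pairing is a crude stand-in for Gosset-style balancing; a real design would stratify on known prognostic variables. Both designs are unbiased here, but the balanced one wastes far less of a small sample on covariate noise.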

Maybe this is just a dispute about wording, but one could say the physicians moved to the hospitals with better technology precisely because they knew the RCT result/prediction was accurate about the effect of the new coronary intervention.
