Small samples mean statistically significant results should usually be ignored

Size really matters: prior to the era of large genome-wide association studies, the large effect sizes reported in small initial genetic studies often dwindled towards zero (that is, an odds ratio of one) as more samples were studied. Adapted from Ioannidis et al. 2001, Nat Genet 29:306-309.

Genomes Unzipped: In October of 1992, genetics researchers published a potentially groundbreaking finding in Nature: a genetic variant in the angiotensin-converting enzyme (ACE) appeared to modify an individual’s risk of having a heart attack. This finding was notable at the time for the size of the study, which involved a total of over 500 individuals from four cohorts, and for the effect size of the identified variant: in a population initially identified as low-risk for heart attack, the variant had an odds ratio of over 3 (with a corresponding p-value less than 0.0001).

Readers familiar with the history of medical association studies will be unsurprised by what happened over the next few years: initial excitement (this same polymorphism was associated with diabetes! And longevity!) was followed by inconclusive replication studies and, ultimately, disappointment. In 2000, 8 years after the initial report, a large study involving over 5,000 cases and controls found absolutely no detectable effect of the ACE polymorphism on heart attack risk.

The ACE story is not unique to that polymorphism or to medical genetics; the problem is common to most fields of empirical science. If the sample size is small, then only large effect sizes can reach statistical significance, so the small studies that do report significant results systematically overestimate the true effect. Combine this with publication bias toward statistically significant results, plenty of opportunities to subset the data in various ways, and lots of researchers looking at lots of data, and the result is effects that diminish as samples grow, as beautifully shown in the figure.
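To see the mechanism concretely, here is a minimal Python simulation sketch (not from the original post; the true odds ratio of 1.1, the 30% baseline exposure frequency, and the study sizes are all assumptions chosen for illustration). It runs many case/control studies of a tiny true effect at different sample sizes and keeps only the odds ratios from studies that reach nominal significance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed for illustration: a real but tiny effect (true odds ratio of 1.1)
# and a 30% exposure frequency among controls.
TRUE_OR = 1.1
P_CONTROL = 0.30

def significant_odds_ratios(n_per_arm, n_studies=20_000):
    """Simulate many case/control studies of a given size and return the
    estimated odds ratios of only those reaching p < 0.05 (two-sided)."""
    odds_case = (P_CONTROL / (1 - P_CONTROL)) * TRUE_OR
    p_case = odds_case / (1 + odds_case)

    exposed_cases = rng.binomial(n_per_arm, p_case, n_studies)
    exposed_controls = rng.binomial(n_per_arm, P_CONTROL, n_studies)

    # 2x2 table cells with a 0.5 continuity correction to avoid zeros.
    a = exposed_cases + 0.5
    b = n_per_arm - exposed_cases + 0.5
    c = exposed_controls + 0.5
    d = n_per_arm - exposed_controls + 0.5

    log_or = np.log((a * d) / (b * c))
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # Woolf standard error
    significant = np.abs(log_or / se) > 1.96       # |z| > 1.96, i.e. p < 0.05

    return np.exp(log_or[significant])

for n in (100, 1_000, 10_000):
    ors = significant_odds_ratios(n)
    print(f"n per arm = {n:>6}: {len(ors):5d} significant studies, "
          f"median reported OR = {np.median(ors):.2f}")
```

In the small studies only inflated estimates clear the significance threshold, so the median "significant" odds ratio comes out far above the true 1.1; in the large studies it converges back toward 1.1, the same dwindling pattern as in the figure.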

For more, see my post explaining Why Most Published Research Findings are False and Andrew Gelman’s paper on the statistical challenges of estimating small effects.

Addendum: Chris Blattman does his part to reduce bias. Will journal editors follow suit?
