Facts about publication bias

Posted on October 12, 2008 at 10:21 am in Books, Science

In 1995, only 1 percent of all articles published in alternative medicine journals gave a negative result.  The most recent figure is 5 percent negative.

That is from Ben Goldacre’s excellent Bad Science, right now available only in the UK.  It is one of the best books I have read on how to think like a scientist, how to critically evaluate evidence, and why we don’t have a better press corps when it comes to science.

I thank Michelle Dawson for the pointer to the book.

1 Richard Gadsden October 12, 2008 at 10:26 am

Ben also has a blog at badscience.net

2 Rana October 12, 2008 at 10:45 am

Yes, an excellent book, an excellent blogsite and an excellent weekly article in the Guardian newspaper. In fact, in my opinion, it is simply the best weekly column in the British media.

And I know that US and UK have spelling differences, but Goldaker vs Goldacre shouldn’t really be allowed 🙂

3 Craig October 12, 2008 at 11:12 am

A small correction. His last name is actually spelled Goldacre.

4 rjh October 12, 2008 at 12:09 pm

I would also like to know the percentage of failure reports in conventional medical journals. I do know that it is low enough that academic medical practitioners are publicly concerned. They want the data from failed attempts made public, not just that from successes. The absence of failure data is seriously hindering the progress of research and evaluation of potential problems.

There are two major reasons that failures of conventional medicine do not get published. First, the research sponsors do not want to reveal their research directions, nor their embarrassment at the universally low success rate of new drugs and techniques. Second, there is a long-standing bias against publishing failure in all branches of science. Failures will not win a Nobel prize, and articles about failures will not get the citations needed for a good academic rating.

5 Richard Koffler October 12, 2008 at 2:00 pm

A good read for science laymen: “Trick or Treatment”. It reviews the history of acupuncture, homeopathy, chiropractic treatment, and herbal medicine, and explains the scientific evidence showing them to be essentially useless. It is also out of the UK, although many of the studies it reviews are from the US.

6 Andrew October 12, 2008 at 3:38 pm

At least if you are getting acupuncture, you aren’t in a hospital receiving a fatal medical mistake.

That said, I would really like to read a paper equivalent to “I, Procedure” that would go through in detail how a truly successful medical miracle came about. I have a feeling most of them involve a surgeon, internist, or orthopedist willing to go out on a limb.

7 Tom Hanna October 12, 2008 at 5:01 pm

Why would you bother announcing failure to the world? Take the time you’d spend writing it up and try the next thing until you get something worth sharing – something that works. Sounds like the alternative medicine people just need to wipe out that last 5%.

8 SteveSC October 12, 2008 at 5:38 pm

Data on unpublished negative studies are naturally hard to come by, but a few days ago Pfizer was sued for hiding negative studies on Neurontin. According to the New York Times, 10/8/08:

“Dr. Dickersin, the Johns Hopkins expert, said that of 21 studies she reviewed, five were positive and 16 negative, meaning they did not prove the drug was effective. Of the five positive studies, four were published in full journal articles, yet only six of the negative studies were published and, of those, two were published in abbreviated form.”
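Running the arithmetic on that excerpt makes the asymmetry concrete (a minimal Python sketch; the counts come straight from the quote above):

```python
# Publication rates by trial outcome, using the counts quoted above.
published = {"positive": (4, 5), "negative": (6, 16)}

for outcome, (pub, total) in published.items():
    print(f"{outcome}: {pub}/{total} published = {pub / total:.0%}")

# positive: 4/5 published = 80%
# negative: 6/16 published = 38%
# A positive Neurontin trial was roughly twice as likely to see print.
```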

Given that the vast majority of alternative medicine studies are small (due to lack of funding) and thus less likely to reach statistical significance, while industry-sponsored pharma trials are often much larger (a study with thousands of patients can find statistical significance even when the clinical significance is almost nil), one could question how many FDA-approved drugs would be in the same research status as alternative medicine were it not for the millions of dollars thrown at them. Some studies have indicated that entire classes of drugs, such as antidepressants, are essentially placebos.
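To put rough numbers on that point, here is a minimal power calculation (my illustrative figures, not SteveSC’s): the standard two-arm sample-size formula shows how a clinically trivial effect becomes “statistically significant” once a sponsor can afford thousands of patients, while a small trial cannot even see effects below about half a standard deviation.

```python
# Standard two-arm sample-size formula:
#   n per arm = 2 * (z_alpha/2 + z_beta)^2 / d^2,
# where d is the effect size in standard deviations.
import math
from scipy.stats import norm

alpha, power = 0.05, 0.80
z = norm.ppf(1 - alpha / 2) + norm.ppf(power)

tiny_effect = 0.05  # assumed effect: 0.05 SD, clinically almost nil
n_per_arm = 2 * (z / tiny_effect) ** 2
print(f"patients per arm to 'detect' a 0.05 SD effect: {n_per_arm:.0f}")  # ~6,280

# Conversely, the smallest effect a 50-patient-per-arm study can
# reliably detect at the same alpha and power:
min_effect = z * math.sqrt(2 / 50)
print(f"smallest detectable effect at n=50 per arm: {min_effect:.2f} SD")  # ~0.56 SD
```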

Although the standard estimate of the placebo effect is 30-40% positive results, studies have shown that when both the patient and doctor are convinced the treatment works (even when the treatment is later proven totally useless), positive results occur in 60-70% of patients.

I agree with Andrew (3:38 PM) that in conditions of uncertainty like this, using a ‘placebo’ with a low risk profile is preferable.

9 Justaguy October 12, 2008 at 6:18 pm

It’s impossible to evaluate the significance of that figure without knowing the percentage of negative results that get written up in biomedical journals, and in scientific journals in general. Is this a failing of alternative medicine journals, or of the way in which we approach research in general?

When trying to find the answer to that question – yes, negative results are underreported in biomedicine, and industry funding probably leads to bad studies – I came across the Journal of Negative Results in BioMedicine. http://www.jnrbm.com/home/

Not something that I’ll be digging into any time soon, but it’s good to know that it’s out there.

10 Ronald Hayden October 12, 2008 at 8:02 pm

For an interesting interview with Goldacre, listen to this episode of the Skeptic’s Guide to the Universe podcast:

http://media.libsyn.com/media/skepticsguide/skepticast2008-09-17.mp3

The podcast is one of my favorites; it’s an irreverent group discussion of science and skeptical topics in the news, by people who really know their stuff. To subscribe to it, go here:

http://www.theskepticsguide.org/

11 CG October 12, 2008 at 9:17 pm

The publication bias in empirical economics research is unsurprising when you consider that finding nothing usually means statistically insignificant results. It is difficult to distinguish among regressions with the wrong functional form, data shortcomings, and the absence of any true relationship. Hence, except in the rare case of a precisely estimated zero, you are unlikely to see publications finding no effect.
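A sketch of that last distinction (my illustrative numbers, not CG’s): two regressions can both return a near-zero, statistically insignificant coefficient, but only the one with a tight standard error actually says anything.

```python
# Two hypothetical regression results with the same point estimate.
beta_hat = 0.01

for label, se in [("noisy study", 0.50), ("precisely estimated zero", 0.02)]:
    lo, hi = beta_hat - 1.96 * se, beta_hat + 1.96 * se
    print(f"{label}: beta = {beta_hat}, 95% CI = [{lo:.2f}, {hi:.2f}]")

# noisy study: CI = [-0.97, 0.99] -- consistent with large effects in
#   either direction, so "no effect found" mostly means "no power".
# precise zero: CI = [-0.03, 0.05] -- actually rules out meaningful
#   effects, the rare kind of null result that can get published.
```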

12 Andrew October 13, 2008 at 3:52 am

I think the important point is that most things don’t work, and most statistics are wrongly done, so the positive results are probably not true.

Back when I was reading about chemotherapy, I got the impression that it is not effective in the vast majority of patients, but it is still administered; it does cause neuropathy in many, with hematotoxicity forming the major dose-limiting condition.

This is a special case because patients are dying and will try anything, but as an example, the situation doesn’t really jibe with most people’s assumptions about how the world works.

I suppose all that’s left is to blame these failures on the free market we have in medicine.

13 Andrew October 13, 2008 at 4:07 am

Oh, and one more, sorry, but it’s just me up at 4 a.m.

I was told by my advisor I couldn’t publish a result that didn’t work. Later he told me I misunderstood him. Unfortunately, I still don’t understand.

Another problem was that I couldn’t get a picture at 63x, so I asked what was acceptable and what was optimal. He said 63x was acceptable and 63x was optimal. He wasn’t trying to be humorous.

Student-advisor communication problems are a separate issue, but I wonder if identical conversations have happened between submitters and reviewers. Committees would probably accept negative outcomes if the researcher put his foot down and said “that’s all you get.” But if you ask someone what they want, they will tell you what they prefer. I suspect they prefer the dramatic over the important.

14 Mark October 14, 2008 at 1:32 am

I’ve been in biomedical science my whole career (I’m now a professor with grant support for my lab, etc.).
I find these posts on science fairly frustrating. Here is a question for Tyler and Alex: do you two actually *talk* to working biomedical scientists to get some perspective? I mean, we biomed people do exist in quite large numbers!

To begin, the frustration here is as follows:
(1) There is an extremely large difference between biomed studies that are clinical, like “drug x was tested in 92 patients,” and biomed work on a more fundamental basis, like “a phosphatase cascade by which rewarding stimuli control nucleosomal response” (FYI, this second one is a major paper in Nature). The whole interpretive framework (issues with reproducibility, the ability to do follow-up studies, different types of experimental testing) is different.
(2) The old adage “absence of evidence is not evidence of absence” is near-dogma in the more “fundamental” biomedical research areas; in other words, you don’t trust negative results. In contrast (and rightly so, to my mind), negative results are of course often much clearer in clinical trials.

Really, you need to stop conflating different parts of biomedical science. The easiest way to understand this is simply to spend some time with some half-decent biomedical scientists – some who do clinical work and others who do more basic work.

Again, thanks for the fantastic blog but please learn how biomed science really works! Your statements on science and biomed science are quite misleading, in my view.

Mark

15 Harald K October 14, 2008 at 12:31 pm

If it’s available only in the UK, how come I bought it in the subway here in Oslo? 🙂

Actually, I thought about writing a review somewhere. It’s an interesting book. It is, however, not all that well written; it reads more like a good blog than a good book. Maybe that is just the contrast after reading Jared Diamond and Barack Obama.

16 nana May 13, 2009 at 3:11 am

Is it realistic?

17 tom May 13, 2009 at 3:14 am

They are old articles.

