The life of an academic con man

April 27, 2013 at 3:26 am in Education, Science

The key to why Stapel got away with his fabrications for so long lies in his keen understanding of the sociology of his field. “I didn’t do strange stuff, I never said let’s do an experiment to show that the earth is flat,” he said. “I always checked — this may be by a cunning manipulative mind — that the experiment was reasonable, that it followed from the research that had come before, that it was just this extra step that everybody was waiting for.” He always read the research literature extensively to generate his hypotheses. “So that it was believable and could be argued that this was the only logical thing you would find,” he said. “Everybody wants you to be novel and creative, but you also need to be truthful and likely. You need to be able to say that this is completely new and exciting, but it’s very likely given what we know so far.”

Here is more, interesting throughout.  I liked this part too:

Stapel did not deny that his deceit was driven by ambition. But it was more complicated than that, he told me. He insisted that he loved social psychology but had been frustrated by the messiness of experimental data, which rarely led to clear conclusions. His lifelong obsession with elegance and order, he said, led him to concoct sexy results that journals found attractive. “It was a quest for aesthetics, for beauty — instead of the truth,” he said. He described his behavior as an addiction that drove him to carry out acts of increasingly daring fraud, like a junkie seeking a bigger and better high.

One of the best articles I’ve read this year; the author is Yudhijit Bhattacharjee.

1 Steve Sailer April 27, 2013 at 4:00 am

Tell people what they want to hear and they won’t ask too many questions. From the article:

That spring, he published a widely publicized study in Science about an experiment done at the Utrecht train station showing that a trash-filled environment tended to bring out racist tendencies in individuals. …
On his return trip to Tilburg, Stapel stopped at the train station in Utrecht. This was the site of his study linking racism to environmental untidiness, supposedly conducted during a strike by sanitation workers. In the experiment described in the Science paper, white volunteers were invited to fill out a questionnaire in a seat among a row of six chairs; the row was empty except for the first chair, which was taken by a black occupant or a white one. Stapel and his co-author claimed that white volunteers tended to sit farther away from the black person when the surrounding area was strewn with garbage. Now, looking around during rush hour, as people streamed on and off the platforms, Stapel could not find a location that matched the conditions described in his experiment.

“No, Diederik, this is ridiculous,” he told himself at last. “You really need to give it up.” …

2 Yancey Ward April 27, 2013 at 11:02 am

Tell people what they want to hear and they won’t ask too many questions.

I don’t think there is a more succinct and accurate description of his method.

3 JWatts April 27, 2013 at 4:23 pm

Exactly this.

4 aaa April 27, 2013 at 6:28 am

His lifelong obsession with elegance and order is what led him to publish faked research? How does the NYT come up with this stuff?

5 john personna April 27, 2013 at 7:29 am

An allusion to R&R, obviously.

6 Yancey Ward April 27, 2013 at 11:03 am

An allusion to the entire profession of economics, I am afraid to say.

7 john personna April 27, 2013 at 11:45 am

I suppose that strictly speaking Nassim Taleb and Justin Fox are not economists, and thus their cautions are from the outside. For that matter, the behavioralists seem to come in as psychologists …

8 Yancey Ward April 27, 2013 at 11:06 am

Actually, I think it describes the entire field of economics perfectly. They even fake themselves out without knowing it.

9 F. Lynx Pardinus April 27, 2013 at 11:48 am

You omitted the full NYT quote: “His lifelong obsession with elegance and order, *he said,* led him to concoct sexy results that journals found attractive.” [emphasis mine]

10 JWatts April 27, 2013 at 4:26 pm

That was his rationalization. The NYTimes wasn’t necessarily agreeing with it.

11 a-non April 27, 2013 at 6:33 am

“Everybody wants you to be novel and creative, but you also need to be truthful and likely. You need to be able to say that this is completely new and exciting, but it’s very likely given what we know so far.”

Data point on Stapel from MR (using vampire movies as the knowledge base): http://marginalrevolution.com/marginalrevolution/2008/09/mirrors-as-a-me.html

12 a-non April 27, 2013 at 7:06 am

Calm down. I clicked through several levels from the post, and not even the academic journal page had a flag on the article. An illustration, one data point only. I thought the text was hilarious; that’s why I shared it, not to be grumpy.

13 prior_approval April 27, 2013 at 7:45 am

Don’t panic – the calming has already been done by the overseers of the site.

14 Rahul April 27, 2013 at 9:18 am

+1

I Googled his Western blot images, and there’s not a word on that journal’s manuscript webpage saying this was retracted.

15 Rahul April 27, 2013 at 9:20 am

Brainfart. My bad.

Reading Stapel’s case led me to this guy, another fraud but in neurology.

http://ori.hhs.gov/content/case-summary-adibhatla-rao-m

16 jcaldwell April 27, 2013 at 7:16 am

Indeed.

17 Bill April 27, 2013 at 7:10 am

The Rogoff and Reinhart award goes to this guy

18 Jan April 27, 2013 at 10:55 am

I was just thinking that. Great timing for this article.

19 Thor April 27, 2013 at 2:24 pm

Oh? Did they just make stuff up for years and years?

20 Jan April 27, 2013 at 2:31 pm

As R&R said, “Well, technically, I never stated that exactly…”

21 Cliff April 28, 2013 at 9:58 pm

What, the Excel spreadsheet mistake award? The legitimate dispute among scholars regarding appropriate weighting methods award?

22 Aw C'mon now. April 29, 2013 at 11:14 am

Aw, c’mon Bill.

To say that R&R’s spreadsheet error is in any way comparable to the outright fraud this guy practiced over a long period of time is ludicrous.

When you make totally over-the-top claims like that, all it does is reveal progressive/Keynesian cultism.

23 Curt F. April 27, 2013 at 7:22 am

The real problem is the attitude that studies “fail” if they don’t show preferred outcomes. From the article (and BTW this quote is not coming from the fraudster): “I don’t know that I ever saw that a study failed, which is highly unusual,” he told me. “Even the best people, in my experience, have studies that fail constantly. Usually, half don’t work.” There it is, folks. Studies — no matter how well designed — “don’t work” if they don’t produce the desired outcome.

24 prior_approval April 27, 2013 at 7:41 am

‘“don’t work” if they don’t produce the desired outcome’

Just tell that to the pharmaceutical industry – the excerpt below is from the book “Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients,” which has already been highlighted here (http://marginalrevolution.com/marginalrevolution/2012/11/bad-pharma-by-ben-goldacre.html), showing how private companies cannot be trusted to report accurate information on matters that affect their self-interest. And as a bonus, the information in the excerpt is based on empirical data.

‘Sponsors get the answer they want.

Before we get going, we need to establish one thing beyond any doubt: Industry-funded trials are more likely than independently funded trials to produce a positive, flattering result. This is our core premise, and one of the most well-documented phenomena in the growing field of “research about research.” It has also become much easier to study in recent years because the rules on declaring industry funding have become a little clearer.

We can begin with some recent work. In 2010, three researchers from Harvard and Toronto found all the trials looking at five major classes of drug — antidepressants, ulcer drugs and so on — and then measured two key features: were they positive, and were they funded by industry? They found over 500 trials in total: 85 percent of the industry-funded studies were positive, but only 50 percent of the government-funded trials were. That’s a very significant difference.

In 2007, researchers looked at every published trial that set out to explore the benefit of a statin. These are cholesterol-lowering drugs which reduce your risk of having a heart attack, and they are prescribed in very large quantities. This study found 192 trials in total, either comparing one statin against another, or comparing a statin against a different kind of treatment. Once the researchers controlled for other factors (we’ll delve into what this means later), they found that industry-funded trials were 20 times more likely to give results favoring the test drug. Again, that’s a very big difference.

We’ll do one more. In 2006, researchers looked into every trial of psychiatric drugs in four academic journals over a 10-year period, finding 542 trial outcomes in total. Industry sponsors got favorable outcomes for their own drug 78 percent of the time, while independently funded trials only gave a positive result in 48 percent of cases. If you were a competing drug put up against the sponsor’s drug in a trial, you were in for a pretty rough ride: You would only win a measly 28 percent of the time.’

http://www.salon.com/2013/01/27/bad_pharma_drug_research_riddled_with_half_truths_omissions_lies/
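
A gap like 85 percent versus 50 percent over roughly 500 trials is not a subtle effect. As a rough illustration, here is a minimal two-proportion z-test in Python; the 250/250 split between industry- and government-funded trials is an assumption made up for the example (the excerpt says only “over 500 trials in total”), so treat this as a sketch, not a reanalysis.

```python
import math

def two_proportion_z(successes1, n1, successes2, n2):
    """Two-sided two-proportion z-test: are the two rates distinguishable?"""
    p1, p2 = successes1 / n1, successes2 / n2
    pooled = (successes1 + successes2) / (n1 + n2)   # pooled rate under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))       # P(|Z| > z) for a standard normal
    return z, p_value

# Hypothetical split: 250 industry-funded trials, 85% positive,
# vs. 250 government-funded trials, 50% positive.
z, p = two_proportion_z(212, 250, 125, 250)
print(f"z = {z:.1f}, two-sided p = {p:.1e}")  # z is around 8; p is vanishingly small
```

With any remotely plausible split of the 500-odd trials, the difference is many standard errors wide, which is why the author can call it “a very significant difference” without qualification.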

25 dearieme April 27, 2013 at 7:35 am

I have run into the occasional crook from my first academic job onwards. Two I declined to work with, and therefore of whom I have no first-hand experience, became FRSs and Masters of Oxbridge colleges, receiving honours from the Queen en route: I was saved from working with them by being cautioned against it by colleagues. You’ll notice that their crookedness was therefore pretty well known in private. The allegation in each case involved stealing data from colleagues, and stealing ideas by abusing privilege – e.g. stealing an idea from an applicant for a prestigious fellowship when the crook in question was on the selection panel. Whether those two also fabricated data I don’t know; given their moral character it must be a distinct possibility. On the other hand, they worked in fields where experiments might be replicated, so perhaps they restrained themselves to polishing up their data a bit.

The only consolation I have is that the first crook I encountered was eventually sacked from his Chair at an ancient university, under the guise of ill-health retirement. That’s not an easy thing to achieve but it was probably made easier by his also indulging in theft.

And yet; there is now a whole field, Climate Science, where it sometimes seems that the crooks outnumber the honest men. It’s a funny old world.

26 prior_approval April 27, 2013 at 10:25 am

Personally, I always find real-time data to be the best. Though the link itself is to a monthly summary, the science is really fascinating, if one favors empirical measurements. Especially empirical measurements which demonstrate just how flawed earlier models have been in their assumptions – http://nsidc.org/arcticseaicenews/

27 aaa April 27, 2013 at 8:21 am

After reading the article, what strikes me is how worthless the faked research is, even if it had been real. I think this reflects on the “science” of psychology. Choice quote: “Sitting at his kitchen table in Groningen, he began typing numbers into his laptop that would give him the outcome he wanted. He knew that the effect he was looking for had to be small in order to be believable; even the most successful psychology experiments rarely yield significant results.”

28 Merijn Knibbe April 27, 2013 at 8:37 am

Economists call the very same thing the ‘calibration’ of their models – and get away with it.

29 Sam April 29, 2013 at 8:12 pm

Except that ‘calibration’ in economics almost always means changing initial parameter values when using algorithms or dynamic-systems models, using a more appropriate estimator, or altering the method of estimation (I’m thinking GMM vs. MLE or extremum estimators in econometrics). Making up data like this clown did is completely different. And yet economics (micro most of all) will never have the same reputation for needing to publish only models that ‘work’ or produce novel results. Simply showing that there might not be a statistically significant effect for something may in itself be important for a body of literature. The problem is that most journalists reporting on psychology pick up any study as a definitive result, while most economic material is so intellectually inaccessible to the general public that journalists either can’t judge whether a single story is significant enough to report on, or they just appeal to a supposed ‘consensus’ of the literature by asking some person they know.

30 mw April 27, 2013 at 9:03 am

The worst thing about overt corruption like this is that it gives people someone to pile onto while drawing attention away from the bigger structural problems, like the widespread poor understanding of math and statistics, especially in psychology and biology, which, not coincidentally, turn out to be the fields where experiments seem to be least repeatable and where literature reviews most often turn up improper use of statistics.
It’s like the feeling of relief we get when Justice nabs Blagojevich: no need to worry, then, about how the psychology of Congress and the sphere of laws it considers are affected by primarily talking to, being around, and getting advice from people with lots of money.

31 anon April 27, 2013 at 10:30 am

But, but, but…IT’S SCIENCE!

Just need to sprinkle the right magic words around.
Paraphrasing Nick Gillespie,

It’s stunning what people will excuse if the right magic words are sprinkled over data.

(“It’s stunning what people will excuse if the right magic words are sprinkled over the repression.” Nick Gillespie
http://reason.com/blog/2013/03/05/rep-jose-serrano-on-hugo-chavez-a-leader )

32 JWatts April 27, 2013 at 4:36 pm

The phrase “useful idiots” is probably one of the best ever.

33 Dana April 27, 2013 at 10:48 am

Funny these passages didn’t merit mention here:

“People think of scientists as monks in a monastery looking out for the truth,” he said. “People have lost faith in the church, but they haven’t lost faith in science. My behavior shows that science is not holy.”

What the public didn’t realize, he said, was that academic science, too, was becoming a business. “There are scarce resources, you need grants, you need money, there is competition,” he said. “Normal people go to the edge to get that money. Science is of course about discovery, about digging to discover the truth. But it is also communication, persuasion, marketing. I am a salesman. I am on the road. People are on the road with their talk. With the same talk. It’s like a circus.” He named two psychologists he admired — John Cacioppo and Daniel Gilbert — neither of whom has been accused of fraud. “They give a talk in Berlin, two days later they give the same talk in Amsterdam, then they go to London. They are traveling salesmen selling their story.”

34 Mark Thorson April 27, 2013 at 12:37 pm

Are you suggesting that academics who travel around a lot giving talks are somehow less trustworthy?

35 Thor April 27, 2013 at 2:27 pm

How is this any different from when trusted figures in positions of authority turn out to be abusing that trust?

Answer: it isn’t different. And that’s why guys like Stapel are so infuriating.

36 Boonton April 27, 2013 at 12:34 pm

Speaking of R&R, it does sound a bit too ‘good’. Someone just happens to miscopy an Excel formula all the way down, leaving out three data points, just enough to flip a failure to confirm a hypothesis into a weak success? Has an effort been made to examine their previous papers and working Excel documents to see whether this too is a pattern? Right-wingers have spent years crying ‘fraud’ and ‘conspiracy’ regarding global warming for a lot less than this.

37 Cliff April 28, 2013 at 10:01 pm

It didn’t change the conclusion of the paper at all and was wholly consistent with their median calculations, which is what they originally stressed. Why would they fake the mean in a clumsy fashion and not the median?

39 Brian Donohue April 27, 2013 at 1:53 pm

Is it just me, or do examples of what our political enemies do leap effortlessly to mind here, while examples of pitfalls to which we ourselves are prone don’t present themselves so readily? Kahneman is not surprised, nor is Jesus for that matter.

Ultimately, I’m a Popperian optimist. Well-designed experiments can yield results, but science is about earnest efforts to falsify, so it’s a long, slow slog in fields like economics.

40 hanmeng April 27, 2013 at 10:20 pm

You may think that psychology or the other social sciences are bad, but “sloppy science” — misuse of statistics, ignoring of data that do not conform to a desired hypothesis and the pursuit of a compelling story no matter how scientifically unsupported it may be — is what characterizes the vast majority of literary criticism. Stapel’s not the only one with “a fuzzy, postmodernist relationship with the truth.”

41 Boonton April 28, 2013 at 2:49 pm

Perhaps a little economic reasoning can be combined with the observations here to help us account for the possibility of fraud when we do not have the resources to really check for it.

Stapel’s method was to produce datasets that did not shock but instead conformed to existing theories. His ‘creativity’ showed up in devising clever experiments to ‘check’ established theories while staying well in line with them. This provides us with some guidelines:

1. Major theories are unlikely to be wrong and accepted only because of fraud. A good fraudster does not fake data to overturn a theory but to confirm it. Data that overturn a theory invite scrutiny and skepticism, which the fraudster wants to avoid.

2. Studies that simply confirm what has already been established should be given less weight, since they are more likely to be frauds. A thousand new physics experiments that essentially say “nothing seems to be able to move faster than light” probably have some fraud among them. A set of experiments that say “some things seem able to move faster than light” is more likely to contain simple mistakes than purposeful fraud. (A small Bayesian sketch of this logic follows after the list.)

3. Unlike in sports, researchers with better track records are less trustworthy. A major factor in uncovering Stapel seems to be that he always got it right; his experiments always produced relevant results. A good question to ask, then, is “how many times did he get it wrong?” If the answer is never, it’s time to start asking him to turn over his spreadsheets and raw data.

3.1 As a corollary, one should ask “how often has this person published results that confirmed his theories/ideology/positions, etc.?” An economist who supports stimulus and says something like “X didn’t stimulate as much as I thought” is less likely to be engaging in fraud than one who always asserts “I looked for X and there it was!”

4. Double blinding might be a solution here. Instead of simple ‘peer review’, do the following: the author must submit the raw data for ‘peer calculation’. Some random element of his data will be recalculated: a small thing that’s not hugely relevant to the overall conclusion, such as, in a study of children, the number of 12-year-old boys who didn’t answer question 5. What’s important is that the reviewers will NOT know what the researcher got. Afterwards the two results will be compared, and mismatches will require more investigation. The uncertainty about which aspect of the data will be examined from the bottom up will either deter fraud by making it more difficult or require the fraudster to be far more extensive in his fraud, thereby decreasing the odds of being able to do it very often without being caught. (A minimal sketch of such a spot check also follows below.)
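
Boonton’s points 2 and 3 are, at bottom, a Bayesian update, and a few lines of Python make the direction of the logic concrete. Every number below is an illustrative assumption rather than an estimate; the only grounded input is the article’s observation that for honest researchers roughly half of studies “don’t work.”

```python
# Bayes' rule applied to points 2-3: a researcher whose studies *always*
# confirm the hypothesis becomes more suspect as the streak grows.
# All three inputs are illustrative assumptions, not estimates.
p_fraud = 0.01           # prior: fraction of researchers fabricating data
p_confirm_honest = 0.5   # honest studies "work" about half the time (per the article)
p_confirm_fraud = 0.99   # a fabricator makes nearly every study come out right

for n in (1, 5, 10, 20):  # n consecutive confirmatory studies
    like_honest = p_confirm_honest ** n
    like_fraud = p_confirm_fraud ** n
    posterior = (p_fraud * like_fraud) / (
        p_fraud * like_fraud + (1 - p_fraud) * like_honest
    )
    print(f"{n:2d} straight confirmations -> P(fraud) = {posterior:.2f}")
```

Under these made-up numbers, a run of ten studies that all “work” pushes the fraud probability from 1 percent to about 90 percent, which is exactly the intuition behind asking “how many times did he get it wrong?”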
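
Point 4’s “peer calculation” is also concrete enough to sketch. Here is a minimal Python illustration under stated assumptions: the author deposits values for a menu of small, derivable statistics up front, and the reviewer recomputes one of them, drawn at random, from the raw data, blind to the author’s figure. All data and names here are hypothetical, not an existing tool.

```python
import random

# Hypothetical raw data: one record per child in the study.
raw_data = [
    {"age": 12, "sex": "M", "q5": None},   # None = did not answer question 5
    {"age": 12, "sex": "M", "q5": 3},
    {"age": 11, "sex": "F", "q5": 4},
    {"age": 12, "sex": "F", "q5": None},
]

# A menu of small checkable statistics. Neither side knows in advance
# which one will be drawn, which is what makes the check double-blind.
CHECKS = {
    "n_12yo_boys_skipping_q5": lambda rows: sum(
        1 for r in rows if r["age"] == 12 and r["sex"] == "M" and r["q5"] is None
    ),
    "n_respondents": len,
    "n_answered_q5": lambda rows: sum(1 for r in rows if r["q5"] is not None),
}

def blind_spot_check(author_reported, rows, seed=None):
    """Draw one statistic at random, recompute it from the raw data,
    and compare it with the value the author deposited in advance."""
    rng = random.Random(seed)
    name = rng.choice(sorted(CHECKS))
    recomputed = CHECKS[name](rows)
    return name, recomputed, author_reported[name] == recomputed

# The author submits a value for every possible check before knowing
# which one the reviewers will recompute; a mismatch triggers investigation.
author_values = {"n_12yo_boys_skipping_q5": 1, "n_respondents": 4, "n_answered_q5": 2}
name, value, ok = blind_spot_check(author_values, raw_data, seed=7)
print(f"checked {name}: reviewer got {value}, match = {ok}")
```

The deterrent works exactly as the comment describes: to survive a random spot check, a fabricator has to maintain an entire internally consistent raw dataset, not just a table of headline results.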

Comments on this entry are closed.
