The life of an academic con man

The key to why Stapel got away with his fabrications for so long lies in his keen understanding of the sociology of his field. “I didn’t do strange stuff, I never said let’s do an experiment to show that the earth is flat,” he said. “I always checked — this may be by a cunning manipulative mind — that the experiment was reasonable, that it followed from the research that had come before, that it was just this extra step that everybody was waiting for.” He always read the research literature extensively to generate his hypotheses. “So that it was believable and could be argued that this was the only logical thing you would find,” he said. “Everybody wants you to be novel and creative, but you also need to be truthful and likely. You need to be able to say that this is completely new and exciting, but it’s very likely given what we know so far.”

Here is more, interesting throughout.  I liked this part too:

Stapel did not deny that his deceit was driven by ambition. But it was more complicated than that, he told me. He insisted that he loved social psychology but had been frustrated by the messiness of experimental data, which rarely led to clear conclusions. His lifelong obsession with elegance and order, he said, led him to concoct sexy results that journals found attractive. “It was a quest for aesthetics, for beauty — instead of the truth,” he said. He described his behavior as an addiction that drove him to carry out acts of increasingly daring fraud, like a junkie seeking a bigger and better high.

One of the best articles I’ve read this year, the author is Yudhijit Bhattacharjee.


Tell people what they want to hear and they won't ask too many questions. From the article:

That spring, he published a widely publicized study in Science about an experiment done at the Utrecht train station showing that a trash-filled environment tended to bring out racist tendencies in individuals. ...
On his return trip to Tilburg, Stapel stopped at the train station in Utrecht. This was the site of his study linking racism to environmental untidiness, supposedly conducted during a strike by sanitation workers. In the experiment described in the Science paper, white volunteers were invited to fill out a questionnaire in a seat among a row of six chairs; the row was empty except for the first chair, which was taken by a black occupant or a white one. Stapel and his co-author claimed that white volunteers tended to sit farther away from the black person when the surrounding area was strewn with garbage. Now, looking around during rush hour, as people streamed on and off the platforms, Stapel could not find a location that matched the conditions described in his experiment.

“No, Diederik, this is ridiculous,” he told himself at last. “You really need to give it up.” ...

Tell people what they want to hear and they won’t ask too many questions.

I don't think there is a more succinct and accurate description of his method.

Exactly this.

His lifelong obsession with elegance and order is what led him to publish faked research? How does the NYT come up with this stuff?

An allusion to R&R, obviously.

An allusion to the entire profession of economics, I am afraid to say.

I suppose that strictly speaking Nassim Taleb and Justin Fox are not economists, and thus their cautions are from the outside. For that matter, the behavioralists seem to come in as psychologists ...

Actually, I think it describes the entire field of economics perfectly. They even fake themselves out without knowing it.

You omitted the full NYT quote: "His lifelong obsession with elegance and order, *he said,* led him to concoct sexy results that journals found attractive." [emphasis mine]

That was his rationalization. The NYTimes wasn't necessarily agreeing with it.

“Everybody wants you to be novel and creative, but you also need to be truthful and likely. You need to be able to say that this is completely new and exciting, but it’s very likely given what we know so far.”

Data point on Stapel from MR (using vampire movies as the knowledge base):

Calm down. I clicked through several levels from the post and not even the academic journal page had a flag on the article. Illustration, one data point, only. I thought the text was hilarious; that's why I shared it, not to be grumpy.

Don't panic - the calming has already been done by the overseers of the site.


I googled up his Western Blot images and not a word on that Journal manuscript webpage saying this was retracted.

Brainfart. My bad.

Reading Stapel's case led me to this guy, another fraud but in neurology.

The Rogoff and Reinhart award goes to this guy

I was just thinking that. Great timing for this article.

Oh? did they just make up stuff for years and years?

As R&R said, "Well, technically, I never stated that exactly..."

What, the Excel spreadsheet mistake award? The legitimate dispute among scholars regarding appropriate weighting methods award?

Aw, c'mon Bill.

To say that R&R's spreadsheet error is in any way comparable to the outright fraud this guy practiced over a long period of time is ludicrous.

When you make totally over-the-top claims like that, all it does is reveal progressive/Keynesian cultism.

The real problem is the attitude that studies "fail" if they don't show preferred outcomes. From the article (and BTW this quote is not coming from the fraudster): “I don’t know that I ever saw that a study failed, which is highly unusual,” he told me. “Even the best people, in my experience, have studies that fail constantly. Usually, half don’t work.” There it is, folks. Studies -- no matter how well designed -- "don't work" if they don't produce the desired outcome.

'“don’t work” if they don’t produce the desired outcome'

Just tell that to the pharmaceutical industry - the excerpt is from the book "Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients," which has already been highlighted here, showing how private companies cannot be trusted to report accurate information on matters that affect their self-interest. And as a bonus, the information in the excerpt is based on empirical data.

'Sponsors get the answer they want.

Before we get going, we need to establish one thing beyond any doubt: Industry-funded trials are more likely than independently funded trials to produce a positive, flattering result. This is our core premise, and one of the most well-documented phenomena in the growing field of “research about research.” It has also become much easier to study in recent years because the rules on declaring industry funding have become a little clearer.

We can begin with some recent work. In 2010, three researchers from Harvard and Toronto found all the trials looking at five major classes of drug — antidepressants, ulcer drugs and so on — and then measured two key features: were they positive, and were they funded by industry? They found over 500 trials in total: 85 percent of the industry-funded studies were positive, but only 50 percent of the government-funded trials were. That’s a very significant difference.

In 2007, researchers looked at every published trial that set out to explore the benefit of a statin. These are cholesterol-lowering drugs which reduce your risk of having a heart attack, and they are prescribed in very large quantities. This study found 192 trials in total, either comparing one statin against another, or comparing a statin against a different kind of treatment. Once the researchers controlled for other factors (we’ll delve into what this means later), they found that industry-funded trials were 20 times more likely to give results favoring the test drug. Again, that’s a very big difference.

We’ll do one more. In 2006, researchers looked into every trial of psychiatric drugs in four academic journals over a 10-year period, finding 542 trial outcomes in total. Industry sponsors got favorable outcomes for their own drug 78 percent of the time, while independently funded trials only gave a positive result in 48 percent of cases. If you were a competing drug put up against the sponsor’s drug in a trial, you were in for a pretty rough ride: You would only win a measly 28 percent of the time.'

I have run into the occasional crook from my first academic job onwards. Two I declined to work with, and therefore of whom I have no first-hand experience, became FRSs and Masters of Oxbridge colleges, receiving honours from the Queen en route: I was saved from working with them by being cautioned against it by colleagues. You'll notice that their crookedness was therefore pretty well-known in private. The allegation in each case involved stealing data from colleagues, and stealing ideas by abusing privilege - e.g. stealing the idea on an applicant for a prestigious fellowship when the crook in question was on the selection panel. Whether those two also fabricated data I don't know; given their moral character it must be a distinct possibility. On the other hand, they worked in fields where experiments might be replicated so perhaps they restrained themselves to polishing up their data a bit.

The only consolation I have is that the first crook I encountered was eventually sacked from his Chair at an ancient university, under the guise of ill-health retirement. That's not an easy thing to achieve but it was probably made easier by his also indulging in theft.

And yet; there is now a whole field, Climate Science, where it sometimes seems that the crooks outnumber the honest men. It's a funny old world.

Personally, I always find real-time data to be the best. Though the link itself is to a monthly summary, the science is really fascinating, if one favors empirical measurements. Especially empirical measurements which demonstrate just how flawed earlier models have been in their assumptions -

After reading the paper, what strikes me is how worthless the faked research is, even if it would have been real. I think this reflects on the "science" of psychology. Choice quote: "Sitting at his kitchen table in Groningen, he began typing numbers into his laptop that would give him the outcome he wanted. He knew that the effect he was looking for had to be small in order to be believable; even the most successful psychology experiments rarely yield significant results."

Economists call the very same thing the 'calibration' of their models - and get away with it.

Except that 'calibration' in economics almost always means changing initial parameter values when using algorithms or dynamic-systems models, using a more appropriate estimator, or altering the method of estimation (I'm thinking GMM vs. MLE or extremum estimators in econometrics). Making up data like this clown did is completely different. And yet economics (micro most specifically) will never acquire psychology's reputation for needing to publish only models that 'work' or produce novel results. Simply showing that there might not be a statistically significant effect for something may itself be important for a body of literature. The problem is that most journalists reporting on psychology pick up any study as a definitive result, while most economic material is so intellectually inaccessible to the general public that the media either can't treat a single story as significant enough to report on, or they just appeal to a supposed 'consensus' of the literature by asking some person they know.

The worst thing about overt corruption like this is that it gives people someone to pile onto while drawing attention away from the bigger structural problems, like widespread poor understanding of math and statistics, especially in psychology and biology, which, coincidentally enough, turn out to be the places where experiments seem to be least repeatable and literature reviews turn up improper use of statistics most frequently.
It's like the feeling of relief we get when Justice nabs Blagojevich--no need to worry therefore about how the psychology of Congress and the sphere of laws it considers are affected by primarily talking to, being around, and getting advice from people with lots of money.

But, but, but...IT'S SCIENCE!

Just need to sprinkle the right magic words around.
Paraphrasing Nick Gillespie,

It's stunning what people will excuse if the right magic words are sprinkled over data.

("It's stunning what people will excuse if the right magic words are sprinkled over the repression." Nick Gillespie )

The phrase "useful idiots" is probably one of the best ever.

Funny these passages didn't merit mention here:

“People think of scientists as monks in a monastery looking out for the truth,” he said. “People have lost faith in the church, but they haven’t lost faith in science. My behavior shows that science is not holy.”

What the public didn’t realize, he said, was that academic science, too, was becoming a business. “There are scarce resources, you need grants, you need money, there is competition,” he said. “Normal people go to the edge to get that money. Science is of course about discovery, about digging to discover the truth. But it is also communication, persuasion, marketing. I am a salesman. I am on the road. People are on the road with their talk. With the same talk. It’s like a circus.” He named two psychologists he admired — John Cacioppo and Daniel Gilbert — neither of whom has been accused of fraud. “They give a talk in Berlin, two days later they give the same talk in Amsterdam, then they go to London. They are traveling salesmen selling their story.”

Are you suggesting that academics that travel around a lot giving talks are somehow less trustworthy?

Why is this any different when trusted figures in positions of authority turn out to be abusing that trust?

Answer: it isn't different. And that's why guys like Stapel are so infuriating.

Speaking of R&R, it does sound a bit too 'good'. Someone just happens to copy an Excel formula incorrectly all the way down, leaving out three data points, just enough to turn a failure to confirm a hypothesis into a weak success? Has an effort been made to examine their previous papers and working Excel documents to see whether this too may be a pattern? Right-wingers have spent years crying 'fraud' and 'conspiracy' regarding global warming over a lot less than this.

It didn't change the conclusion of the paper at all and was wholly consistent with their median calculations, which is what they originally stressed. Why would they fake the mean in a clumsy fashion and not the median?


Is it just me, or do examples of what our political enemies do leap effortlessly to mind here, while examples of pitfalls to which we ourselves are prone don't present themselves so readily? Kahneman is not surprised, nor is Jesus for that matter.

Ultimately, I'm a Popperian optimist. Well-designed experiments can yield results, but science is about earnest efforts to falsify, so it's a long, slow slog in fields like economics.

You may think that psychology or the other social sciences are bad, but '“sloppy science” — misuse of statistics, ignoring of data that do not conform to a desired hypothesis and the pursuit of a compelling story no matter how scientifically unsupported it may be' is what characterizes the vast majority of literary criticism. Stapel's not the only one with "a fuzzy, postmodernist relationship with the truth".

Perhaps a little economic reasoning can be combined with the observations here to help us account for the possibility of fraud when we do not have the resources to really check for it.

Stapel's method was to produce datasets that did not shock but conformed to existing theories. His 'creativity' showed up in devising clever experiments to 'check' established theories while staying well in line with them. This provides us with some guidelines:

1. Major theories are unlikely to be wrong and accepted only because of fraud. A good fraudster does not fake data to overturn a theory but confirm it. Data that overturns a theory invites scrutiny and skepticism, which the fraudster wants to avoid.

2. Studies that simply confirm what has already been established should be given less weight, since they are more likely to be frauds. A thousand new physics experiments that essentially say "nothing seems to be able to move faster than light" probably have some fraud among them. A set of experiments that say "some things seem able to move faster than light" are more likely to contain simple mistakes than purposeful fraud.

3. Unlike in sports, researchers with better track records are less trustworthy. A major factor in uncovering Stapel seems to be that he always got it right; his experiments always produced relevant results. A good question to ask, then, is "how many times did he get it wrong?" If the answer is never, it's time to start asking him to turn over his spreadsheets and raw data.

3.1 As a corollary, one should ask "how often has this person published results that confirmed his theories/ideology/positions, etc.?" An economist who supports stimulus and says something like "X didn't stimulate as much as I thought" is less likely to be engaging in fraud than one who always asserts "I looked for X and there it was!"

4. Double blinding might be a solution here. Instead of simple 'peer review', do the following: the author must submit the raw data for 'peer calculation'. Some random element of his data will be recalculated: a small thing that's not hugely relevant for the overall conclusion, such as, in a study of children, the number of 12-year-old boys who didn't answer question 5. What's important is that the reviewers will NOT know what the researcher got. Afterwards the two results will be compared, and mismatches will require more investigation. The uncertainty about which aspect of the data will be examined from the bottom up will either deter fraud by making it more difficult, or require the fraudster to be more extensive in his fraud, thereby decreasing the odds of being able to do it very often without being caught.
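The 'peer calculation' idea in point 4 can be sketched as a short script. Everything below is illustrative, not a real protocol: the dataset, the subgroup statistic, and the summary table are all made up, and the reviewers' random draw is just a seeded `random.Random`.

```python
import random

# Toy raw dataset submitted alongside the paper: one record per child.
# All values here are invented for illustration.
raw_data = [
    {"age": 12, "sex": "M", "answered_q5": False},
    {"age": 12, "sex": "M", "answered_q5": True},
    {"age": 11, "sex": "F", "answered_q5": True},
    {"age": 12, "sex": "F", "answered_q5": False},
]

# The author's reported summary table, keyed by (age, sex, answered_q5).
reported = {
    (12, "M", False): 1,
    (12, "F", False): 1,
    (11, "F", True): 1,
}

def count_subgroup(records, age, sex, answered):
    """Recompute one minor statistic directly from the raw data."""
    return sum(1 for r in records
               if r["age"] == age and r["sex"] == sex
               and r["answered_q5"] == answered)

def audit(records, summary, rng):
    """Pick a random cell of the summary table (the author does not know
    which in advance), recompute it from the raw data, and compare."""
    key = rng.choice(sorted(summary))
    recomputed = count_subgroup(records, *key)
    return key, recomputed, summary[key], recomputed == summary[key]

rng = random.Random(42)          # in practice, seeded after submission
key, got, claimed, ok = audit(raw_data, reported, rng)
```

The key design point, as the comment says, is that the audit target is drawn only after the raw data are in hand, so the author cannot pre-polish just the cells that will be checked; a mismatch between `got` and `claimed` triggers further investigation rather than proving fraud by itself.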

Comments for this post are closed