Sentences of interest

People whose results were closer to the fatal cut-off point of p=0.05 were less likely to share their data.

That is from Robert Trivers, about psychologists, source here.

Comments

Wow. Psychologists ("scientists") lie! Who knew?

(Is it Thomas Szasz who has made an analogous argument for a while?)

Maybe I'm channeling Deirdre McCloskey, but I don't understand why psychologists are obsessed with small p-values. A p-value of 0.005 is not 'better' than a p-value of 0.025. It's mainly about sample size. A line has to be drawn somewhere, but if a paper has great results based on p-values ranging between 0.01 and 0.10, I would still find it worthwhile.
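A quick simulation makes the sample-size point concrete. This is a rough sketch, assuming a fixed standardized effect of 0.3 and a two-sample t-test; the effect size and sample sizes are made up for illustration:

```python
# Rough sketch: the same true effect yields very different p-values
# depending only on sample size (effect size and noise held fixed).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.3  # assumed standardized effect; illustrative only

for n in (20, 50, 200, 800):
    # Simulate many studies at each sample size and report the median p-value.
    pvals = []
    for _ in range(2000):
        treatment = rng.normal(true_effect, 1.0, n)
        control = rng.normal(0.0, 1.0, n)
        _, p = stats.ttest_ind(treatment, control)
        pvals.append(p)
    print(f"n={n:4d}  median p = {np.median(pvals):.4f}")
```

With the effect held fixed, the typical p-value drifts from "nonsignificant" to "highly significant" purely as n grows, which is the sense in which the p-value is mainly about sample size.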

Here's an idea: Before publishing, require that a replication be done by a reviewer, and if it passes muster and gets published, give the reviewer (no longer anonymous) a short publication too: a replication note with the reviewer's additional insights.

This guarantees that the review process will be made harder for the reviewer, and therefore that it will be harder to get people to review. It also requires funding, and sometimes more -- the number of people willing to build a cockroach lab to replicate Bob Zajonc's social facilitation work (from the 1950's) might be regarded as small.

"Maybe I’m channeling Deirdre McCloskey, but I don’t understand why psychologists are obsessed with small p-values"

Is this some kind of a joke?

"A p-value of 0.005 is not 'better' than a p-value of 0.025."

What? A result with the former p-value is five times more likely to be a real result than the latter. To my mind this is equivalent to saying that $100 is "not better" than $20.

I also think that "great results" have to be true. If they have up to a 10% chance of being false, that makes them less great.
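One rough way to put numbers on that comparison, under strong simplifying assumptions, is the Sellke-Berger calibration, which caps how much evidence a given p-value can carry against the null hypothesis:

```python
# Rough sketch: upper bound on the Bayes factor against the null implied
# by a p-value, via the Sellke-Berger calibration BF <= -1 / (e * p * ln p).
# The assumptions behind this bound are strong; treat it as illustrative.
import math

def max_bayes_factor(p):
    # Valid for p < 1/e; smaller p allows more evidence against the null.
    return -1.0 / (math.e * p * math.log(p))

for p in (0.025, 0.005):
    print(f"p = {p}: Bayes factor against the null is at most {max_bayes_factor(p):.1f}")
```

By that calibration, p = 0.005 admits roughly three to four times as much evidence against the null as p = 0.025 (about 13.9 versus 4.0); the exact multiplier depends on the assumptions, but "not better" is hard to defend.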

TC,

This is the real cause of The Great Stagnation: (1) somebody cheats (consciously or otherwise) to get a publishable result; (2) nobody checks the result, because funding, tenure, prizes, and endowed chairs are not given for replicating others' work, because anyone who does find cheating (especially at their own institution) gets fired, and because, on the off chance that anybody does check, the original researcher simply withholds the data and code, making verification impossible, while the journal (not wanting a scandal over a result it published) declines to force their release; (3) new research by a second researcher, built on the initial falsified research, fails to achieve a publishable result; and so (4) the second researcher cheats (consciously or otherwise). Rinse and repeat and repeat and repeat.

This produces "science mal-investment" as more and more resources are consumed to obtain less and less in terms of actual results. "The Great Stagnation" says that to solve the problem of TGS we must increase the social standing and material rewards received by scientists and university professors. That is wrong. In fact, it is almost exactly wrong.

The idea that a working lab can provide data on demand multiple years after publication is idealized beyond all recognition. Make them provide it at publication if you really care; otherwise, realize that it won't be available.

Modern labs have turnover. Grad students leave in three or four years. Postdocs leave in one or two. Programs get written for individual projects and left in home directories, where they are unmaintained and may be erased when disk space runs low. Compilers get updated and linking lines fail.

The published paper is the end product of a course of research, and should be treated as such. If you want to replicate a study, do your own (*&^(*ing work. If you want valid statistical tests, insist on them at publication.

In other words, garbage out. Not sure if it was intended, but what a damning statement.

The idea that someone would keep their word by storing a few digital files for a few years is idealized beyond all recognition?

"Make them provide it at publication if you really care; otherwise, realize that it won’t be available."

Exactly. In a digital age, I see little technical difficulty in demanding the raw data as a routine part of the submission for publication (thereby making it available to the paper's reviewers as well). That would have a real effect -- the promise to provide later, upon request, is evidently pretty meaningless.

I'd run the same test on natural scientists -- biologists, chemists, chemical and biological engineers, materials scientists (less so in physics departments) -- but do it based on whether or not the researchers report the number of controls done, versus just stating that controls were performed. Don't think that the natural science community is particularly rigorous.
