Clyde Schechter defends IRBs (from the comments)

This is not my view, but I am happy to present an alternative perspective for your consideration:

Yes, IRBs sometimes do ridiculous things. But I served a total of 21 years on the IRBs of two different institutions, and I’m sure I can match you anecdote for anecdote with obviously dangerous study protocols submitted by investigators, or protocols where the associated consent documents were blatantly misleading or so confusing that even professionals couldn’t understand them. It’s a small minority of submissions, to be sure, but it’s a recurring problem.

In my experience, most protocol delays in IRB review boiled down to issues of clarifying ambiguous language or providing additional background information so that the appropriateness of the proposal could be better assessed. I suspect that much of that could be avoided with better training of investigators on how to write their submissions. At one of the institutions where I served, my Department encouraged junior investigators to “pre-clear” their IRB submissions with me or another Department member who also served on the IRB. We were often able to spot the things that would likely catch the IRB’s attention and help those investigators revise their protocols before submitting them, so that they would sail through approval without delays on the first try.

In my view, no person should ever be the judge of his/her own cause. There is nothing in the earlier rules, nor in the modified ones, that prevents an IRB from expediting the review of social science projects that plainly involve little or no risk. Such protocols can be turned around by a staff member in a day or two. But it should never be left to the investigators to make those assessments on their own.

Here is the link of origin.


Clyde nailed it. Self-policing doesn't work.

'In my view, no person should ever be the judge of his/her own cause.'


'This is not my view, but I am happy to present an alternative perspective for your consideration:'

Spot the director of a public policy institute devoted to influencing public debate.

I agree with Mr Schechter. I have served on the same "IRB" in a major public teaching/research hospital in Melbourne for about 15 years as a "community member". We have a variety of research proposals to review every month; most clinical but many in the social sciences, too. The IRB gets through some 30-50 reviews each month, most but not all from the hospital and its many associated entities.

We have a variety of levels of review, from "low risk" (one or two reviewers, out of session, expected turnaround under 5 business days) for non-intervention or quality-control/record-review studies, to full reviews for, e.g., first-in-human (FIH) trials that involve both clinical evaluation and separate evaluation of participant information and consent (typically 4-10 weeks from submission to approval). FIH trials typically face the same timeline, but also undergo parallel external expert review.

The hospital charges for ethical approval of studies with (much) higher fees for commercial trials than for investigator/clinician trials; so we get at least some market signals. The market is somewhat competitive: commercial sponsors can shop-around clinical institutions in Australia, and do. There are 6 or so institutions on the East Coast that host multi-center, international, and FIH trials of drugs and clinical interventions. We have to respond every couple of years to complaints about delays etc. mostly passed to us by the Hospital Board (who hear from commercial sponsors as well as trial participants).

I agree with Mr Schechter that delays and costs in the ethical review procedure in the institution I serve are more often than not due to badly prepared/expressed/designed research proposals, a lack of respect for (or interest in) ordinary human values, and the burdensome overlay of non-review regulation that researchers (reasonably) find dispiriting. Delays in ethical review, once it starts, are exceptional, not the rule. Fewer than 5 percent of reviews go to the Committee — which meets monthly — more than once. Almost all low-risk reviews are cleared within the one-week timeframe. But since a large number of these are student projects, they have a higher resubmission rate than full reviews.

We try hard to be responsive to researchers. No one (I think) serves for many years on an IRB because they have some sort of fixation with ethical standards. Most of us continue to serve, year after year, because we are strongly interested in promoting research and fascinated by the ideas and innovations we see. We put a lot of weight on the autonomy of adult participants to consent to even ugly things in some circumstances, so long as they are competent and well-informed (that's my biggest worry).

I should add that in 15 years I have seen only a small handful of proposals that were ethically out-of-bounds. In most cases of delay, a researcher or sponsor has simply not considered how the mechanisms of recruitment or participation or the provision/treatment of patient information would work with real people involved. Researchers have dumb ideas as well as great ideas. Sometimes their dumb ideas are about the people who will be helping them with their project by accepting their trial intervention. Those are the dumb ideas we try to catch and help the researchers fix, so the trial can go ahead without either the Hospital Board or the media hearing any more about it.

With respect, I wonder what world you live in where 10 weeks (or more) is a reasonable amount of time for an IRB review to complete? I also wonder if your charge schedule has been subject to an independent ethical review; I'm guessing it has not.

Only a very small proportion (~5%) of reviews take more than one 4-week cycle for review. Most involve either difficult external expert review (typically, 'first in human' drug or device trials where the relevant expertise is hard to find) or, less often, re-submission by investigators who declined to make changes first requested by the Committee. Charges for ethical review of commercial trials are set by the hospital administration to cover the costs of administering the reviews. The main control on the charges is competition for the research spending of commercial sponsors running clinical trials. Since the hospital is a public entity, it is probably answerable to the State Government for its charges (but competition with other institutions in this city and others seems to be more effective). Unsponsored, investigator-led, and student studies face much lower charges or are reviewed without charge. I live in Melbourne, Australia.

There is a very important historical perspective to be introduced.

Originally, IRBs were presumed to be composed of investigators, with some representatives of the community at large, and some specialists (like statisticians) to consult.
Investigators generally don't like paperwork and bureaucracy, so the boards were given broad discretion to determine whether review was really needed or not, while individual
investigators couldn't go out and determine on their own that their work was free from the need for review - for obvious reasons.

Over time, the investigators proved so allergic to paperwork that they used overhead funds to hire IRB professional administrators, who love paperwork. Now, these
professionals have a professional pride in their craft, and see its importance clearly - which tends to expand their scope. But as cost rather than profit centers, the
IRBs found themselves subject to scrutiny, and in tight times, jobs might disappear. To control overhead and justify having assistants, etc., institutional administrators
set the budget of these 'human subjects protection' offices based on the number of protocols reviewed. This is both utterly sensible and a huge problem.
By making waiver determinations, the IRB offices were doing unpaid labor. But it doesn't make sense to pay them more for a waiver than for a full or expedited review...
This is a trap, simply put. The IRB will grow in scope and intensity almost monotonically, until it crushes the investigators - not just the ones who were originally
subject to review (like investigative new drugs), but all investigators in almost any field... The fees I have seen are ~$1,500-$3,000 per protocol, which is not at all
insignificant for a small social science project, a survey, or something similar.

There is only one solution: take scope determination out of the hands of the IRB. Determining 'jurisdiction' must be done independently, and with very low overhead.
Blanket statements are good - 'interviews with single subjects to write a monograph on the life of that person or his or her friends, relatives, and colleagues is not human
subjects research (because it is not producing generalized knowledge!)' - but at the margins, there may be some details to work out. There needs to be a mechanism
which doesn't further bloat bureaucracy.

As for all the reasonable defenses of IRBs (and IACUCs, for that matter) - of course! Protocols are often improved and preregistration of studies makes them better, on
the whole! Having an IRB review a study is important! There are ways to cut through the red tape - one IRB for a whole multi-center study, for example - but overall,
having 'some' IRB do the review is important.

For those that question the need for IRBs:

The replication crisis has bitten the world of psychology as hard as everywhere else. Nosek's study showed that "Over half of psychology studies fail reproducibility test". This must be due to, in differing degrees, some combination of negligence, fraud, and/or outright incompetence on the part of the investigators in designing, conducting, and evaluating their work.

But the one thing you think these investigators CAN be trusted to get right *every time* is the ethical component of designing their research.

I don't think IRB approval does anything to resolve the replication problem. It can't stop outright fraud (i.e., making up the results once IRB approval is given). And it doesn't solve the so-called file drawer problem. Under the conventional 0.05 significance threshold, about 1 in 20 experiments of a true null effect will show a statistically significant result through random noise alone. If people only try to publish their statistically significant results and put the other ones in their file drawer, then lots of experiments with spurious effects will get published.
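The 1-in-20 arithmetic behind the file drawer problem is easy to check with a quick simulation (a sketch of my own, not anything from the thread): run many experiments where the true effect is exactly zero, test each at the 0.05 level, and see how many come out "significant" by chance.

```python
# Sketch: false positives under a true null effect.
# Each "experiment" draws 100 observations from a standard normal
# (true mean = 0) and runs a two-sided z-test at alpha = 0.05.
import random

random.seed(42)

def null_experiment(n=100):
    """One experiment with no real effect; returns True if 'significant'."""
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]
    mean = sum(sample) / n
    z = mean / (1.0 / n ** 0.5)   # z-statistic, known sigma = 1
    return abs(z) > 1.96          # two-sided test at the 0.05 level

trials = 10_000
significant = sum(null_experiment() for _ in range(trials))
print(f"False-positive rate: {significant / trials:.3f}")  # close to 0.05
print(f"Left in the 'file drawer': {trials - significant} null results")
```

If only the ~5% of "significant" runs are submitted for publication while the other ~95% stay in the drawer, the published record consists entirely of spurious effects, and no IRB review step touches that selection process.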

Suppose we are just focusing on social science, and not doing anything outside normal life activities, even so yes there will always be some risk, just like there's risk to other people when I get into my car to drive to the supermarket to buy a gallon of milk. Do I need a committee to review whether I should be allowed to drive to the supermarket?

I guess I'm saying there is a scope problem if any degree of risk whatsoever mandates IRB review.

Do you want a guarantee your taxi driver has a driver's license before you let him drive you to the supermarket? It's not like researchers are experimenting on *themselves*.

The idea that IRB needs to exist for social science (which I agree with) and the idea that IRB for social science is too onerous (which I also agree with) are not mutually exclusive.

I've submitted to IRB many, many times. It's all innocuous stuff (the only full review I ever had was when I wanted to survey minors) but there's always something they want changed. Very rarely is it anything that would actually improve subject safety, and so from that perspective it's a waste of time and I should be getting a rubber stamp without much concern about the details. More often it's something that would provide the U some minor legal protection, which it seems to me is the real underlying reason for the existence and funding of IRB.

This. My IRB experience is that they're more concerned about boxes being checked and engage in mission creep. It would be nice if the applications focused more on the ethical and safety issues that justify their existence.
