A New FDA for the Age of Personalized, Molecular Medicine

by Alex Tabarrok on June 12, 2013 at 7:20 am in Economics, Medicine

In a brilliant new paper (pdf) (html) Peter Huber draws upon molecular biology, network analysis and Bayesian statistics to make some very important recommendations about FDA policy. Consider the following drugs (my list):

Drug A helps half of those to whom it is prescribed but it causes very serious liver damage in the other half. Drug B works well at some times but when administered at other times it accelerates the disease. Drug C fails to show any effect when tested against a placebo but it does seem to work in practice when administered as part of a treatment regime.

Which of these drugs should be approved and which rejected? The answer is that all of them should be approved; that is, all of them should be approved if we can target each drug to the right patient at the right time and with the right combination of other drugs. Huber argues that Bayesian adaptive testing, with molecular biology and network analysis providing priors, can determine which patients should get which drugs when and in what combinations. But we can only develop the data to target drugs if the drugs are actually approved and available in the field. The current FDA testing regime, however, is not built for adaptive testing in the field.

The current regime was built during a time of pervasive ignorance when the best we could do was throw a drug and a placebo against a randomized population and then count noses. Randomized controlled trials are critical, of course, but in a world of limited resources they fail when confronted by the curse of dimensionality. Patients are heterogeneous and so are diseases. Each patient is a unique, dynamic system, and at the molecular level diseases are heterogeneous even when symptoms are not. In just the last few years breast cancer has been subdivided into first four and now ten different types, and the subdivision is likely to continue as knowledge expands. Match heterogeneous patients against heterogeneous diseases and the result is a high-dimensional system that cannot be well navigated with expensive, randomized controlled trials. As a result, the FDA ends up throwing out many drugs that could do good:

Given what we now know about the biochemical complexity and diversity of the environments in which drugs operate, the unresolved question at the end of many failed clinical trials is whether it was the drug that failed or the FDA-approved script. It’s all too easy for a bad script to make a good drug look awful. The disease, as clinically defined, is, in fact, a cluster of many distinct diseases: a coalition of nine biochemical minorities, each with a slightly different form of the disease, vetoes the drug that would help the tenth. Or a biochemical majority vetoes the drug that would help a minority. Or the good drug or cocktail fails because the disease’s biochemistry changes quickly but at different rates in different patients, and to remain effective, treatments have to be changed in tandem; but the clinical trial is set to continue for some fixed period that doesn’t align with the dynamics of the disease in enough patients.

Or side effects in a biochemical minority veto a drug or cocktail that works well for the majority. Some cocktail cures that we need may well be composed of drugs that can’t deliver any useful clinical effects until combined in complex ways. Getting that kind of medicine through today’s FDA would be, for all practical purposes, impossible.

The alternative to the FDA process is large collections of data on patient biomarkers, diseases and symptoms, all evaluated on the fly by Bayesian engines that improve over time as more data are gathered. The problem is that the FDA is still locked in an old mindset when it refuses to permit any drugs that are not “safe and effective,” despite the fact that these terms can only be defined for a large population by doing violence to heterogeneity. Safe and effective, moreover, makes sense only when physicians are assumed to be following simple, A-to-B, drug-to-disease prescribing rules and not when they are targeting treatments based on deep, contextual knowledge that is continually evolving.
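What might such an on-the-fly Bayesian engine look like? Below is a minimal sketch (mine, not Huber’s): a conjugate Beta-Binomial model that updates its belief about a drug’s response rate within each biomarker-defined subgroup as field observations arrive. The drug names, subgroup labels, priors and data are hypothetical, purely for illustration.

```python
# Minimal sketch of a Bayesian "engine" that learns drug response rates per
# biomarker subgroup as field data arrive. Hypothetical names and data.
from collections import defaultdict


class ResponseBelief:
    """Posterior over the response rate for one (drug, biomarker) pair."""

    def __init__(self, prior_alpha=1.0, prior_beta=1.0):
        # A molecular-biology prior could be encoded here by choosing
        # alpha/beta to reflect expected pathway activity.
        self.alpha = prior_alpha
        self.beta = prior_beta

    def update(self, responded: bool):
        # Beta-Binomial conjugate update from a single observed outcome.
        if responded:
            self.alpha += 1
        else:
            self.beta += 1

    def mean(self) -> float:
        return self.alpha / (self.alpha + self.beta)


beliefs = defaultdict(ResponseBelief)

# Hypothetical stream of field observations: (drug, biomarker profile, outcome).
observations = [
    ("drug_A", "marker_X_positive", True),
    ("drug_A", "marker_X_negative", False),
    ("drug_A", "marker_X_positive", True),
]
for drug, marker, responded in observations:
    beliefs[(drug, marker)].update(responded)

for key, belief in beliefs.items():
    print(key, round(belief.mean(), 2))
```

The point of the sketch is only that beliefs sharpen as usage data accumulate; a real system would need far richer models and, as the comments below stress, careful handling of selection and multiple comparisons.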

In a world of molecular medicine and mass heterogeneity, the FDA’s role will change from applying a single yes-no rule that fits no one to certifying biochemical pathways:

By allowing broader use of the drug by unblinded doctors, accelerated approval based on molecular or modest—and perhaps only temporary—clinical benefits launches the process that allows more doctors to work out the rest of the biomarker science and spurs the development of additional drugs. The FDA’s focus shifts from licensing drugs, one by one, to regulating a process that develops the integrated drug-patient science to arrive at complex, often multidrug, prescription protocols that can beat biochemically complex diseases.

…As others take charge of judging when it is in a patient’s best interest to start tinkering with his own molecular chemistry, the FDA will be left with a narrower task—one much more firmly grounded in solid science. So far as efficacy is concerned, the FDA will verify the drug’s ability to perform a specific biochemical task in various precisely defined molecular environments. It will evaluate drugs not as cures but as potential tools to be picked off the shelf and used carefully but flexibly, down at the molecular level, where the surgeon’s scalpels and sutures can’t reach.

In an important section, Huber notes that some of the biggest successes of the drug system in recent years occurred precisely because the standard FDA system was implicitly bypassed by orphan drug approval, accelerated approval and off-label prescribing (see also The Anomaly of Off-Label Prescribing).

But for these three major licensing loopholes, millions of people alive today would have died in the 1990s. Almost all the early HIV- and AIDS-related drugs—thalidomide among them—were designated as orphans. Most were rushed through the FDA under the accelerated-approval rule. Many were widely prescribed off-label. Oncology is the other field in which the orphanage, accelerated approval, and off-label prescription have already played a large role. Between 1992 and 2010, the rule accelerated patient access to 35 cancer drugs used in 47 new treatments. For the 26 that had completed conventional followup trials by the end of that period, the median acceleration time was almost four years.

Together, HIV and some cancers have also gone on to demonstrate what must replace the binary, yes/no licensing calls and the preposterously out-of-date Washington-approved label in the realm of complex molecular medicine.

Huber’s paper has a foreword by Andrew C. von Eschenbach, former commissioner of the FDA, who concludes:

For precision medicine to flourish, Congress must explicitly empower the agency to embrace new tools, delegate other authorities to the NIH and/or patient-led organizations, and create a legal framework that protects companies from lawsuits to encourage the intensive data mining that will be required to evaluate medicines effectively in the postmarket setting. Last but not least, Congress will also have to create a mechanism for holding the agency accountable for producing the desired outcomes.

prior_approval June 12, 2013 at 7:46 am

‘that is, all of them should be approved if we can target each drug to the right patient at the right time and with the right combination of other drugs.’

This, of course, is impossible in the actual world we exist in.

‘can determine which patients should get which drugs when and in what combinations’

So, who is volunteering to be in the group establishing the baseline? Recognizing, of course, that such human testing violates currently accepted medical ethical norms.

‘some of the biggest successes of the drug system in recent years occurred precisely because the standard FDA system was implicitly bypassed by orphan drug approval, accelerated approval and off-label prescribing’

Well, Vioxx comes to mind – ‘The Justice Department said Merck illegally promoted Vioxx for rheumatoid arthritis before that use was approved by the Food and Drug Administration in 2002. The drug was initially approved in 1999 to treat certain types of pain. Drug companies are barred from promoting drugs for unauthorized, or “off label,” uses, though doctors may prescribe off-label uses.’ http://webcache.googleusercontent.com/search?hl=de&q=cache:bjBpKGJ5w5IJ:http://online.wsj.com/article/SB10001424052970204531404577054472253737682.html%2Bvioxx+off+label&gbv=1&ct=clnk

Somehow, I don’t think a system that promotes experimentation on human subjects (and how often is that with informed consent?) will long be able to withstand the temptation that Merck couldn’t resist. Unless we change the laws of course – after all, what are a few deaths here and there compared to glorious new discoveries for the profit margin?

‘Almost all the early HIV- and AIDS-related drugs—thalidomide among them—were designated as orphans.’

To the best of my knowledge concerning Contergan, it is used for cancer and leprosy (though the WHO recommends against its use for leprosy) – only a few researchers in the past seemed to think it had any applications concerning AIDS – http://en.wikipedia.org/wiki/Thalidomide#Possible_indications

But Contergan does suggest that some people are willing to pay a price in using it – ‘Thalidomide has been used by Brazilian physicians as the drug of choice for the treatment of severe ENL since 1965, resulting in 33 cases of thalidomide embryopathy in people born in Brazil after 1965.’ Of course, the ones paying for it are the children. But only 33 completely preventable cases, so really, what is the big deal? Especially if the mother signed a release form – thus ensuring it wasn’t the drug’s fault, it was the mother’s.

dan1111 June 12, 2013 at 8:11 am

“such human testing violates currently accepted medical ethical norms”

Human trials of not-yet-approved drugs already occur. That is how drug approval already happens. So, assuming these are consenting volunteers, it is not clear how this is ethically any different than current practice.

Right now there is a hard line drawn between clinical trials and other drug usage. Why? If an individual suffering from a disease is informed and willing to take the risks, why should they not be free to try an experimental drug? They are voluntarily choosing to do the same thing that a participant in a clinical trial is doing. It is especially outrageous in the case of extremely debilitating or terminal diseases with no treatment.

Jan June 12, 2013 at 8:44 am

“But we can only develop the data to target drugs if the drugs are actually approved and available in the field.” No, you do more clinical trials if you need to better target the therapy. FDA approves drugs with narrow labeling indications all the time.

One reason we have human research protections is so risks and unknowns are clearly communicated and patients are able to make the choice to be part of the research. There is a framework to handle that process. IRBs are involved and oversee research primarily to protect patients. If you have a hypothesis about a drug that is also potentially toxic, you carefully design a protocol to test your hypothesis, watch for adverse events and make sure you don’t expose more people than is necessary to generate the needed evidence.

From what I can tell, this proposal just shifts what is usually done in clinical trials to the general physician practice setting, with no oversight. Dangerous stuff.

Gabriel June 12, 2013 at 8:56 am

Why not have “open trials”? Say you allow anyone to join the trial at any point, as long as they follow certain rules (such as attending certain medical centers, having certain tests done and reported back to the online database that would be tracking all these things, etc.). It would be better than the current system by almost every metric.

If we redesign the process then we can have much better innovations baked in, and you can add the safeguards that you want straight at the design phase.
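A rough sketch of the record-keeping such an “open trial” might require, assuming (hypothetically) that the protocol’s rules amount to “report these tests back to the central database”:

```python
# Sketch of an open-trial registry record: anyone may enroll at any time,
# but stays in the analysis set only while the reporting rules are met.
# Field names and the required-test list are hypothetical.
from dataclasses import dataclass, field
from datetime import date

REQUIRED_TESTS = {"liver_panel", "tumor_genotype"}  # assumed protocol rules


@dataclass
class OpenTrialRecord:
    patient_id: str
    drug: str
    enrolled: date
    reported_tests: set = field(default_factory=set)
    outcomes: list = field(default_factory=list)

    def report_test(self, test_name: str) -> None:
        self.reported_tests.add(test_name)

    def is_compliant(self) -> bool:
        return REQUIRED_TESTS.issubset(self.reported_tests)


record = OpenTrialRecord("pt-001", "drug_B", date(2013, 6, 12))
record.report_test("liver_panel")
record.report_test("tumor_genotype")
print(record.is_compliant())  # True once the required tests are on file
```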

Jan June 12, 2013 at 9:09 am

Don’t get me wrong, clinical trials need a lot of improvement. What we are doing ain’t cutting it, but I think you also need certain criteria for who can enroll in a trial and some semblance of a protocol, because that is how you can control for confounding, etc. and produce good analyses. But at the same time, it is not necessary to have 1,000 inclusion and exclusion criteria, which often happens now.

I totally support expanding communication about clinical trials to more patients so they can find the studies and enroll in them. We really do need more people participating in research and our recruitment efforts are abysmal.

dan1111 June 12, 2013 at 9:06 am

“Just” shifting clinical trials to a general setting is actually no small deal. It would have a number of benefits:

It would allow trials of many more drugs, increase the power of trials, and lower trial costs. All of this means more beneficial drugs will be found and brought to market.

Adaptive testing would also allow for ongoing testing of drugs as long as they are used, rather than arbitrarily stopping trials once the “approved” condition is met. This would mean greater drug safety, especially testing of things that are hard to determine in clinical trials, like long-term effects and effects in pregnancy.

It would also give all people the freedom to voluntarily choose an experimental drug that may be beneficial, rather than only those who are able to get into a clinical trial.

Yes, it does raise concerns about oversight, which would need to be addressed. But those concerns have to be balanced against the potential benefit.

Jonathan June 12, 2013 at 9:07 am

What part of the curse of dimensionality don’t you get? As Alex and Huber are pointing out, “more clinical trials” are impossible under the current testing regime, which requires specification ex ante in a way that doesn’t allow retrospective calculations because of the multiple comparisons problem. For most diseases, there probably aren’t enough sufferers to design a study large enough to find a drug which helps exactly 10 percent of them.

Jan June 12, 2013 at 9:13 am

Observational research is a big business and getting bigger with more clinical data coming online. If that is what they are talking about, sure. But if you are talking about a pivotal trial for approval of a drug, you still need the same evidentiary standard as you have for other drugs, which is why randomization is usually required.

If it is a rare disease, then there are separate pathways for that, which are mentioned in the paper. It seems those approval pathways are serving their purpose.

Ken June 12, 2013 at 10:58 am

This, of course, is impossible in the actual world we exist in.

Recognizing, of course, that such human testing violates currently accepted medical ethical norms.

‘The Justice Department said Merck illegally promoted Vioxx for rheumatoid arthritis before that use was approved by the Food and Drug Administration in 2002.

These are ALL indications of a broken FDA and legislative system surrounding drug production, especially the first comment. It’s not impossible in the actual world. It’s simply not possible (legal) in the current system we’ve got.

Andrew' June 12, 2013 at 11:02 am

“This, of course, is impossible in the actual world we exist in”

Except that’s exactly what the FDA claims to do. They just fail by definition because they aren’t using genomics, etc.

Andrew' June 12, 2013 at 11:04 am

“So, who is volunteering to be in the group establishing the baseline? Recognizing, of course, that such human testing violates currently accepted medical ethical norms”

No it doesn’t. Try reading the post. Human testing is what people are already doing. It is just unacknowledged by the FDA except when they decide to acknowledge it. The people who are volunteering for it are the people with diseases lacking FDA coverage. It happens all day, every day.

Andrew' June 12, 2013 at 11:05 am

How do you people think drugs get tested?

Andrew' June 12, 2013 at 11:25 am

Seriously, I’m confused. A drug company does some theory and some testing. Then the drug ultimately has limited release to humans. Information is fed back. It’s called human trials. It is human testing. Despite one popular notion of the FDA, the testing continues even after approval (e.g., Vioxx).

Cyrus June 12, 2013 at 8:01 am

Counterpoint: in health maintenance, there is only modest regulatory oversight of which myriad combinations of diet, exercise, and other lifestyle choices a person may choose. Yet the actions we know lead to better health are the ones that are right for the population generally. (You really probably should stop smoking.)

It is completely plausible that there are different dietary regimens different individuals will thrive under, but there is not yet a well understood science of personal molecular nutrition, and it’s not regulation of foodstuff trials standing in the way. Sometimes multivariate statistics are hard, especially when half the variables are hidden.

Jonathan June 12, 2013 at 9:10 am

I don’t know that that’s so much a counterpoint as another example. Salt, fat etc. and the copious recommendations aimed at relatively small subsets are just another example of the same sort of error.

Cyrus June 12, 2013 at 10:56 am

The point is that free access to a variety of nutritional substances has not produced a revolution in understanding their effects on subsets of the human population.

Or put another way, we don’t yet have the experimental methods that will produce real knowledge about relatively unregulated nutritional substances.

So why should we believe the claim that we have the experimental methods that will produce real knowledge about drugs, if only they weren’t so tightly regulated?

Cliff June 12, 2013 at 2:01 pm

Because there is money to be made from drugs

Bender Bending Rodriguez June 12, 2013 at 9:37 pm

Why should someone stop smoking? Practically all the countries with better health outcomes at lower costs smoke more than the US. The Japanese smoke 80%(!) more cigs per capita than Americans and yet have a life expectancy 4 years greater than the US. They even spend about a third of what the US does on health care.

Forget Obamacare, we should mandate smoking!

Rahul June 12, 2013 at 9:15 am

My impression is this is a bit of futuristic speculation but not much that is immediately implementable. In theory, what he says is perfectly sound, no arguments.

If there were an easy way to know which half of the population drug A worked on and which half it damaged livers in, do we think the drug companies wouldn’t be trying their best to tease this out in the clinical trials themselves?

Problem is, these biomarkers are rarely cleanly identified and the amount and cost of the testing regimen is quite prohibitive. When we cannot get it done right in clinical trials the chance that we can get the sort of data after we release the drug in the wild is remote.

All we’ll end up knowing is the drug worked in some and not in others without a good idea of what marker distinguishes the two populations.

dan1111 June 12, 2013 at 9:47 am

The idea of targeting to different groups based on molecular medicine is pretty far out there.

Adaptive drug testing (based on more traditional outcome measures) is immediately implementable, though.

Rahul June 12, 2013 at 10:54 am

Agreed. My problem is that I’m still not clear exactly what Huber et al. would like the FDA to do: the FDA does seem to be OK with adaptive trials per se; the disagreement seems to be about the details.

Specifically, Alex’s post is a bit one-sided: it doesn’t explore the nasty downsides of ambiguity in non-blind testing. This opens the field for Pharma to apply esoteric statistics to data-mine vaporous correlations (which always exist in a sufficiently large set of biomarker Big Data) that can then be exploited to push drugs of questionable efficacy.

Peter Schaeffer June 12, 2013 at 11:16 am

dan111, Rahul,

“The idea of targeting to different groups based on molecular medicine is pretty far out there”

What? It’s become common practice in some cases. See

“What to Do if Your Genetic Test Results Are Positive ”

“BRCA1- and BRCA2-related cancers often test negative for overexpression of the gene known as HER2/neu. This genetic abnormality is not inherited, as BRCA1 and BRCA2 mutations are, but can develop in women over time. When the HER2 gene is overexpressed, the cancer cells have too many HER2 receptors (human epidermal growth factor receptor). HER2 receptors receive signals that stimulate the growth of breast cancer cells. HER2-positive breast cancer is considered to be a more aggressive form of the disease, but it can be treated with Herceptin (chemical name: trastuzumab), a medication that targets HER2. Most BRCA1- and BRCA2-related cancers cannot be treated with Herceptin because they are HER2-negative.”

Note that I am not endorsing Alex Tabarrok’s theme or personalized medicine (for several reasons, including cost).

Andrew' June 12, 2013 at 11:29 am

“Note that I am not endorsing Alex Tabarrok’s theme or personalized medicine (for several reasons, including cost).”

Please explain, considering that directing the correct treatments might be the only way to get a high ROI.

Angelina Jolie is a believer.

Zaq June 12, 2013 at 12:05 pm

“Angelina Jolie is a believer”
Very reassuring.

Rahul June 12, 2013 at 12:10 pm

“Angelina Jolie is a believer” …….and her cost-benefit trade-offs are similar to mine.

Andrew' June 12, 2013 at 12:17 pm

What do you mean?

Andrew' June 12, 2013 at 12:24 pm

I could ask “if even Angelina Jolie gets it, why don’t you?” but I won’t.

If you call something “breast cancer” and, to be “effective,” a treatment has to work all the time, then by definition you will never have a treatment (except pre-emptive mastectomy, I suppose), because there is no such thing (from a certain point of view) as “breast cancer.”

Peter Schaeffer June 12, 2013 at 4:31 pm

Andrew’

I would reply to your question, but other folks have beaten me to it.

Note that even preemptive mastectomy is (apparently) only 90% effective.

Let me just add that America needs a medical system optimized for the cost-benefit tradeoffs of the median citizen, not the 0.01%. As best I can tell, personalized medicine is pushing us further in the wrong direction.

dan1111 June 13, 2013 at 12:20 pm

After reading most of the report, I no longer think it is that far out there.

Much more is already going on than I realized.

heurmann June 12, 2013 at 11:26 am

The solution is a not a “new” FDA.

The solution is no FDA at all.

The FDA has always been an obstacle, not an aid to medical advances. And, of course, it has no Constitutional basis to exist at all.

But somehow we ‘need’ Potomac experts to forcibly control everything…. progressives can never see the obvious.

mulp June 12, 2013 at 10:41 pm

There should be 50 FDAs?

Don’t you think that would mess with Interstate Commerce?

Wouldn’t California FDA define what drugs are legal for the nation?

Or are you thinking Texas would allow anything, radium tincture, curative mercury baths, and all the other stuff that paid Wolfman Jack’s wages? Goat gonads implanted to cure baldness, cancer, …

To be consistent, I hope you are opposed to those who call for Federal “malpractice reform.” Civil torts are pretty much local, so State law is most appropriate.

Limiting drug development to one State is inefficient, on the other hand.

Dan Weber June 12, 2013 at 11:04 am

I think “out there” is right. Presumably we eventually will do it, but not in the next few years.

But a drug that reduces the chance of disease 1 by 50% while quadrupling the risk of disease 2 should be approved. People who are at very high risk of disease 1 relative to disease 2 should take it. It’s up to doctors to determine if their patients belong in that group.

All drugs have side effects. Aspirin probably wouldn’t survive today’s FDA.

mw June 12, 2013 at 9:26 am

The most obvious answer here is, if Bayesian drug targeting is an effective treatment, *it too* can undergo controlled testing.

In any event, throwing the word Bayesian on something doesn’t make it optimal when you’re heuristically making up the priors. But if it IS good, then of course it should pass testing muster.

Aaron June 12, 2013 at 9:34 am

I am a big fan of this blog. However, these posts on drug approval are consistently the least thoughtful and most devoid of rigor of the bunch.

Andrew' June 12, 2013 at 11:28 am

Explain your opinion.

Aaron June 12, 2013 at 12:19 pm

The posts ignore a series of major problems.

1) How exactly to establish a causal link between a drug and a subgroup effect without randomization. Selection bias will be huge. Just saying “it’s Bayesian!” doesn’t solve anything, nor does saying “with molecular biology and network analysis providing priors”. Empirical economics doesn’t do this kind of analysis for a reason. You need an identification strategy.

2) How to define appropriate subpopulations, or how to avoid an enormous number of Type I errors from multiple hypothesis testing of a single drug against a nearly infinite combination of phenotype subgroups. (A rough sketch of this multiple-comparisons arithmetic follows at the end of this comment.)

3) How good is medicine already at targeting evidence-based treatments to even SIMPLE subgroups (men vs. women, elderly vs. non-elderly, etc.)? Answer: not very good. Will adding a zillion choice combinations really help patients get the right care, or will it just lead to an enormous amount of overutilization as patients and physicians try an ever-expanding list of medications that haven’t been shown to be useful on average? Already, there is the sense within medicine that “I know this treatment doesn’t work in general, but I have reason to believe it will work on my patient because my patient is special”. Further enabling this pattern of behavior will be dangerous.

4) How have drug/device companies behaved when it comes to pushing drugs that are not helpful on average, but for which some research shows subgroup effects? Answer: badly. Drug companies only have one margin for profits: sell more of the drug. And that’s what they do.

While I’m sure the regulatory apparatus could use re-tooling to better evaluate the effects of drugs on certain subgroups, the changes advocated in this blog are way too radical, and are advocated as a simple solution.
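To put some back-of-the-envelope arithmetic behind point 2) (the subgroup count and significance threshold below are hypothetical, chosen only to illustrate the problem):

```python
# If a drug truly does nothing, testing it against many phenotype subgroups
# still "finds" effects by chance. Numbers here are illustrative only.
n_subgroups = 1024   # e.g. all combinations of 10 binary phenotype markers
alpha = 0.05         # conventional per-test significance threshold

expected_false_positives = n_subgroups * alpha
prob_at_least_one = 1 - (1 - alpha) ** n_subgroups

print(f"Expected spurious subgroup effects: {expected_false_positives:.0f}")
print(f"P(at least one spurious finding):   {prob_at_least_one:.6f}")

# A Bonferroni-style correction controls this, at the cost of power:
print(f"Per-test threshold after Bonferroni: {alpha / n_subgroups:.2e}")
```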

Andrew' June 12, 2013 at 12:34 pm

1. Doctors give people drugs and when they don’t work they give people different drugs. All Alex is talking about is capturing the data that hints at positive and negative effects. Selection is almost irrelevant.

2. Again, it’s just capturing the inputs and the outputs. They (can) genotype tumors prior to treatment and it is hard to conceive of any other way to do it. How is that different?

3. Again, you don’t distinguish between men and women. You capture that men/women have greater/lesser effectiveness and greater/lesser safety. Now you can distinguish. As we run out of simple small blockbuster molecules it is again hard to conceive of an alternative path forward.

4. This is what doctors are for. They do a decent job.

This is one of Alex’s specialties. Nothing is implied as being simple, as I read it. It will be an enormous undertaking, but one we have little real choice in. What is simple is the popular notion that the FDA “approves” drugs because they are safe and effective.

Aaron June 12, 2013 at 1:17 pm

“Capturing the data that hints at positive and negative effects” is not at all sufficient. Selection is a huge issue. A “hint” of a positive or negative effect is not strong enough evidence. This is why econometrics exists, why clinical trials exist, and why control groups exist. Trials actually determine drug effects through randomization. The reason economics is such a strong empirical discipline is that it limits its observational studies to cases where the effect of randomization can be mimicked. There is no way to do this on a large scale for every drug-subgroup population.

We know that certain drugs work well for certain cancer genotypes by studying those drugs for those genotypes in a randomized controlled trial. It is our gold standard of evidence. There is no substitute that will work across the board.

Andrew' June 12, 2013 at 2:16 pm

If drugs were free and “safe” then would a hint of positive be enough? Is it just a question of where you draw the line?

I think Alex’s point is that what we think is randomization is completely not randomization. For example, what we call breast cancer is not one disease. What we call a breast cancer patient is really many different populations. That is why, to have any cancer treatments at all, like HIV treatments, we already circumvent the popular notional function of the FDA.

Andrew' June 12, 2013 at 2:20 pm

By the way, if you have terminal cancer, a hint of positive would be enough.

Andrew' June 12, 2013 at 2:39 pm

I suspect we are barking up two different trees. If you don’t change the hurdle of “safe and effective” then it may be true that you can’t do what Alex is suggesting.

However, I think it is just that hurdle that forces drug companies to go for blockbusters. What we should have instead is the ability to do microtrials and then expand incrementally quite quickly, rather than putting the huge hurdle at the very start of the race.

Partly this is already happening and will continue to happen as we continue to develop the diagnostics and genomic tests. The bottom line is that at some point you have to give the drugs to people and one of the outcomes of the drug trials will be to refine differential results. Sometimes you don’t know you have a different population until they respond differentially.

Seb June 12, 2013 at 3:53 pm

Aaron makes some valid points about the danger of following every statistical blip but he misses two issues. First, Aaron agrees exactly with Tabarrok when he writes about randomized studies:

“There is no way to do this on a large scale for every drug-subgroup population.”

Correct. Thus because of heterogeneity drugs are always tested in the field, like it or not, and, if we are to gain the benefits of personalized medicine that respects heterogeneity, we have to use other methods.

Second, what Aaron misses is that with more knowledge classic testing becomes less necessary. The more we understand the microbiology of disease and the body, for example, the less we need to test. More knowledge makes medicine more like engineering.

Rahul June 12, 2013 at 1:35 pm

@Aaron

I agree with most of what you wrote. But for a minute, I’ll play devil’s advocate about your #3:

Say you are predicting loan default rates: it’d be hard to do it well based on a single simple characteristic (e.g., male vs. female) but easier when you combine many parameters (age, sex, income, children, education, etc.). Why may the same thing not be true about medicine?

Couldn’t a treatment model that uses 20 parameters conceivably do a better predictive job than a simple subgroup dichotomy?

Aaron June 12, 2013 at 3:55 pm

Adding more variables to a model always makes the model fit or “predict” better. However, that happens even if what you are adding is total randomness. (This is a fun trick for a statistics class.)

This means that 1) a model that fits better does not necessarily help establish a causal connection between a treatment and an outcome and 2) a model that has a million variable combinations in it is more likely to unearth false positive associations between treatments and outcomes.

For instance, imagine trying to model whether income is associated with characteristics of a person’s name. You could add a million variables to this model (does the name rhyme? does the name have alliteration? does it start with the letter A? is the name Bill Gates? etc.). It would be a tight fitting model. If you used that model to determine what to name your kid, you might come away thinking that Michelangelo is a good name (because the couple of people named Michelangelo in your sample were rich) or perhaps that naming your kid “Bill Gates” would make him very rich.

Of course, that approach is nonsense, because data in and of itself doesn’t uncover causality. To get that, you need data to be generated in a certain way, and you need your analysis to reflect the way the data were generated.
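Here is a minimal version of that classroom trick (the sample size and predictor counts are arbitrary): in-sample fit keeps improving as purely random predictors are added, even when the outcome is pure noise.

```python
# In-sample R^2 rises with the number of purely random predictors,
# even though they carry no information about the outcome.
import numpy as np

rng = np.random.default_rng(0)
n = 200
y = rng.normal(size=n)  # outcome with no real predictors at all


def in_sample_r2(n_random_predictors: int) -> float:
    X = np.column_stack([np.ones(n),
                         rng.normal(size=(n, n_random_predictors))])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()


for k in (1, 10, 50, 150):
    print(f"{k:3d} random predictors -> in-sample R^2 = {in_sample_r2(k):.2f}")
# R^2 climbs toward 1.0 as predictors are added, despite zero causal content.
```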

Rahul June 12, 2013 at 4:11 pm

When I said predict better I meant evaluating using something like a confusion matrix or a ROC curve. Something that penalizes both false positives and false negatives. A metric that rewards both accuracy and specificity.

So long as your validation set is distinct from your training set, I fail to see how just throwing more variables at a problem will make it fit better in a trivial sense.

Maybe your overfitted, use-everything model will predict that “Michelangelo” is a high-income name. But shouldn’t waiting for a decade and testing it on an independent validation set immediately prove your model to be quite bad?

Maybe I am wrong, I’m not an expert.
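Here, as a rough sketch with hypothetical numbers, is what I mean: the same kind of noise-fitting model looks flattering on its training data and falls apart on a held-out validation set.

```python
# The same kind of noise-fitting model scored in-sample vs. out-of-sample.
import numpy as np

rng = np.random.default_rng(1)
n_train, n_test, k = 200, 200, 150  # many random predictors, as before

y_train, y_test = rng.normal(size=n_train), rng.normal(size=n_test)
X_train = np.column_stack([np.ones(n_train), rng.normal(size=(n_train, k))])
X_test = np.column_stack([np.ones(n_test), rng.normal(size=(n_test, k))])

beta, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)


def r2(y, X):
    resid = y - X @ beta
    return 1 - resid.var() / y.var()


print(f"Training R^2:   {r2(y_train, X_train):.2f}")  # flattering
print(f"Validation R^2: {r2(y_test, X_test):.2f}")    # near zero or negative
```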

Andrew' June 12, 2013 at 4:32 pm

Couple things: First, false positives aren’t really a problem in a constantly updating system. FDA approval doesn’t really mean that the drug is always effective. If a doctor prescribes it and a patient is disappointed in the results there is nothing that says that they have to stay the course. That information can get fed back to the system since every usage becomes part of the ongoing human trial.

Second, the system would change. For example, there is no reason to assume you would have to stick to the doses that are used now. You could drop the dose to something that is not universally effective but is generally safe if the drug happened to work synergistically within a cocktail.

Knowing for sure that it is universally safe and effective is not necessarily the correct goal. The goal is probably just getting it to the point where it is safe enough to expand the market testing, like most other products on the market.

Willitts June 13, 2013 at 12:32 am

I’m glad you saved me the trouble of all that writing.

A lot of humans died trying to eat various things before the species learned, by knowledge or custom, what to avoid and what to consume and how to consume it. A foodie ought to know that.

If you believe most scientists, we evolved in a world that was indifferent to our survival. Our odds were slim, and that we are here now is a matter of adaptive optimization. As soon as we start to engineer an ethic to accompany this process, we begin to become squeamish about risk and begin to add up our losses without an eye toward the benefits. We can’t even agree which bad people to blow up on the other side of the planet.

In evolution, fortune favors the bold but a million species die out in the attempt. We could easily kill more people than we can ever hope to save with clinical trials gone wrong from perverse incentives, self selection, emotional biases. Evolution is a process of trial and error, emphasis on the error.

Sampling from the population with a destructive sampling method is a cost seldom considered by people with Ph.D.s. I’m not saying we shouldn’t legalize all three of these drugs, but I agree with Aaron that the adaptation is easier said than done. Political will can wither away before the battle is won.

Mike Huben June 12, 2013 at 9:43 am

The transition from low-information medicine to high-information medicine will involve much more than changes to the FDA.

Almost every “alternative” treatment can be analyzed and evaluated this way, and likely to the great protestation of its advocates.

Environmental responsibility for disease will be clearly demonstrated, with concurrent demands for changes to polluting industry, harmful workplaces, and home conditions.

Human-directed medical treatment will largely disappear (excepting surgery, first aid and convalescent care) except to help patients decide what state they want their bodies to be in.

Andrew' June 12, 2013 at 11:38 am

One interesting thing is that one of the “biomarkers” is that the patient has to “have” “the disease.” This goes against theories of addressing anti-aging. The entire medical system, not just the FDA is based on this fundamental error.

MIke Hammock June 12, 2013 at 12:29 pm

Wait, why do we want to “create a legal framework that protects companies from lawsuits”?

Andrew' June 12, 2013 at 12:40 pm

Aren’t you referring to a different news story? The one you refer to is where the government wants to invade our privacy based on something that kills a hundred people a year. What we are talking about is what kills EVERYONE ELSE!

Aaron June 12, 2013 at 4:06 pm

@Seb, you make a very interesting point about theory substituting for empirical rigor in drug development. I think this idea is valid, but not ready for prime time.

If you talk to a drug developer or a clinical scientist, you’ll hear that an ENORMOUS number of drugs that worked in theory, worked in mouse models, and worked in safety testing just didn’t work in people.

As an example from the device world, consider inferior vena caval filters for prevention of pulmonary embolism (sorry for the medical vocab here). In my mind, nothing could be more straightforward from an engineering perspective. The problem: blood clots are entering people’s veins and going into their lungs, killing them. The solution: put a filter in the vein leading to the heart so that big clots can’t get through the heart to the lung. Sounds great, right? Wrong. Turns out that people had just as many pulmonary embolisms in RCTs with the filters. Why? Maybe the filters caused blood clots too. It’s one of many counterintuitive results from the literature.

mulp June 13, 2013 at 12:19 am

When half the doctors are operating as blacksmiths fighting a Henry Ford transformation, and the FDA assumes a Henry Ford doctor, Alex is arguing the FDA should assume a highly automated factory with adaptive production reacting in real time to both quality metrics and refinements in the processing.

The expected outcome is the blacksmith doctor quickly turning out the medical-treatment equivalent of high-performance roadsters built from titanium and carbon fiber.

Doctors today can’t even use the drugs available today effectively, so adding hundreds more that are known to have lots of negative side effects – when much safer and pretty well understood drugs are already producing bad outcomes from drug interactions – sure seems like a bad idea.

If the drugs are limited to dispensing from places like the Cleveland and Mayo Clinics, with significant investment in research, big data, process, team practice, etc., then maybe it could add value. Except that those places can readily get the FDA waiver, which has the proviso that the drugs aren’t profit-making and can’t be marketed.

The interesting issue you raise, but don’t get, is who should “own” the knowledge from data mining patient records, especially re drugs.

If you are suggesting the FDA approve all drugs past a limited set of regulatory checks and then let doctors try them out, then the drug companies will not own anything other than the drug formulation and manufacturing, and the doctors will own the knowledge of what drugs to use and how to use them to treat patients. Only doctors would have the commercial right to run ads like “if you have origins in Cameroon within seven generations, ask your doctor whether the Mayo regimen for preventing strokes, delta-z, applies to you”.

mulp June 13, 2013 at 1:22 am

I note that much of the proposal is based on EuResist, which works with: “The EIDB integrates biomedical information from the three founding national databases: ARCA (Italy), AREVIR (Germany), Karolinska Institute (Sweden).” Plus additional entries from other nations, with a total of about 50,000 HIV patient records.

Obviously, this was a large project to get extracted data from national databases and combine it between nations in the EU, when the US has nothing close to a national database. Note the EU has stricter privacy laws than the US does, so HIPAA cannot be called an obstacle.

Within the US, commercial interests fight against the kinds of databases that are national in the EU. And within the EU, these databases are moving to international standards, so the software is standard, and thus cheap.

Doctors in these nations with national databases all use the national databases. Imagine the screams of protest from doctors who are already screaming in protest over Obamacare requiring the use of some standards for submitting claims, with subsidies for small practices to buy the software.

There is also a strange idea that for-profit corporations will be able to accomplish what the EuResist project did on a large scale to drive down costs, because they will control access to the results and charge high fees to earn high profits – turning a $1 pill into $100 in revenue, I’m sure. Well, if the drug companies are going to make money on the patient data, then the doctors are going to charge for copies of the data, but should patients get a cut of the price of their data?

Note the EuResist project was dealing with existing HIV drugs – drugs already approved.

The FDA is not the problem, in my view; the problem is the lack of a global standard for patient data, and of national databases used by all doctors and hospitals.

Kirk June 13, 2013 at 8:08 am

Huber is mainly right. Set out below is a concrete example of data that proves the point that targeting can in fact be done today for some drugs and diseases. Is it perfect? No. Is it darned good? Yes, sometimes. In the example below, 96 percent of persons with a specific cancer and specific biologies received major benefits, such as not dying. If you otherwise face a death sentence, do you try a targeted drug even if knowledge is imperfect? Probably – a personal choice the FDA should not be blocking.
A related point is this. Today, ordinary oncologists CANNOT and DO NOT deliver the best life-saving care because doing so requires highly expert teams that do not – in general – exist in Fargo but do exist at places like MD Anderson, Mayo and Sloan Kettering.
The data below is from Seattle Genetics at http://investor.seattlegenetics.com/phoenix.zhtml?c=124860&p=irol-newsArticle&ID=1826058&highlight=

Phase III Trial of Brentuximab Vedotin Plus Doxorubicin, Vinblastine, and Dacarbazine (A+AVD) Versus Doxorubicin, Bleomycin, Vinblastine, and Dacarbazine (ABVD) as Front-line Treatment for Advanced Classical Hodgkin Lymphoma (HL) (Abstract #TPS8612)

Recent phase 1 data presented at the 2012 American Society of Hematology (ASH) Annual Meeting demonstrated that A+AVD, which removes bleomycin from the standard frontline ABVD regimen, was associated with a manageable safety profile and a complete remission (CR) rate of 96 percent in the treatment of newly diagnosed HL patients. A global phase 3 study, called ECHELON-1, is an ongoing open-label, randomized, multi-center trial designed to investigate A+AVD versus ABVD as frontline therapy in patients with advanced classical HL. The primary endpoint is modified progression free survival (mPFS) per independent review facility assessment using the Revised Response Criteria for malignant lymphoma (Cheson, 2007). Secondary endpoints include overall survival (OS), CR rate and safety. The trial is being conducted in North America, Europe, Latin America and Asia. The study will enroll approximately 1,040 eligible patients (approximately 520 patients per treatment arm) who have histologically-confirmed diagnosis of Stage III or IV classical HL and who have not been previously treated with systemic chemotherapy or radiotherapy.
