The Good News on the FDA and ANDAs
Yesterday, I pointed out that generic drug prices are falling. So what accounts for the small number of large price increases in the generic drug market? It’s a combination of market shenanigans, supply shocks, and FDA delay.
The markets where price increases have been large tend to be relatively small. Daraprim, for example, is only prescribed some 8,000-12,000 times per year in the United States. The small size of these markets is no accident. Keep in mind that whatever one may think of Shkreli, he did show a kind of entrepreneurial genius in scouring the universe of drugs in the United States to select one where monopoly power could be so effectively exploited. Shkreli found a market where 1) the total size of the market was small, so there wasn’t much competition, 2) the drug treated a serious illness, and 3) there wasn’t a good substitute, so the value of the drug to the small number of patients was very high.
In addition, Shkreli knew that he had at least a 3-4 year window of opportunity to exploit monopoly power. To compete with Daraprim, a competitor would have to submit an Abbreviated New Drug Application (ANDA) to the FDA. Despite the name, an abbreviated application costs at least five million dollars to push through the process, and right now there is a backlog of nearly 3,000 ANDAs at the FDA’s Office of Generic Drugs. In recent years, it has taken 3-4 years to get a generic drug approved. The cost is too high and the delay too long.
(I am focusing on the standard route to market entry and ignoring the possibility of importation or compounding, which I discussed earlier. I’m also ignoring that Daraprim is unusual in that it was approved in 1953, before the current FDA system of safety and efficacy trials, and that the FDA is being absurdly cagey about whether it would allow a simple ANDA for Daraprim. I may write about that in a future post; see here for a related case.)
So what’s the good news? In 2012 Congress passed the Generic Drug User Fee Amendments (GDUFA). Modeled after the very successful Prescription Drug User Fee Act (PDUFA), the legislation earmarks fees paid by generic drug manufacturers for the FDA’s Office of Generic Drugs. As a result of those fees, the FDA has hired more reviewers and is rapidly reducing the backlog. That’s the first piece of good news.
A second piece of good news is that FDA delay isn’t the only cause of the backlog; an unexpected increase in the number of ANDAs also contributed. I would have been much more worried if the number of ANDAs had decreased. Despite new user fees and some increase in regulation, the increase in submissions is evidence that the US generic market is competitive, vibrant, and profitable.
The generic drug market in the United States has been very successful. We are constantly told, for example, that US pharmaceutical prices are the highest in the world. That is true for patented drugs, but generic drug prices in the US are among the lowest in the developed world, and most prescriptions are for generics.
We can address the price hiccups in the generic market by opening up to more world suppliers, speeding up the ANDA process, and keeping the costs of entry low. Overall, however, we shouldn’t let the price hiccups distract from the fact that the generic drug market is competitive, vibrant, and thriving, and we want to keep it that way.
Economists on FDA Reciprocity
Daniel Klein & William Davis surveyed economists about whether it would be an improvement to reform the FDA so that “as soon as a new drug is approved by any one of five [FDA approved international] agencies, that drug automatically gains approval in the United States.” They report:
Of the 467 economists who answered the question and did not mark “Have no opinion,” 53 percent agreed that the reform would be an improvement, while 29 percent disagreed. (The remainder said they were “neutral.”) Moreover, those favoring the reform were more likely to say they held their belief “strongly.” Hence, the balance of economist judgment certainly leaned in favor of the liberalization.
Economists are not the only ones in favor of reciprocity. Others are also coming around, at least partially. In Generic Drug Regulation and Pharmaceutical Price-Jacking, I argued, in response to the massive increases in the price of Daraprim (generic name pyrimethamine), that we ought to allow importation:
Pyrimethamine is also widely available in Europe. I’ve long argued for reciprocity, if a drug is approved in Europe it ought to be approved here. In this case, the logic is absurdly strong. The drug is already approved here! All that we would be doing is allowing import of any generic approved as such in Europe to be sold in the United States.
In a paper in JAMA discussing the same case, Drs. Jeremy Greene, Gerard Anderson, and Joshua M. Sharfstein agree, writing:
A second option is to temporarily permit the importation of drug products reviewed by competent regulatory authorities and approved for sale outside the United States. For example, Glaxo, the original manufacturer of pyrimethamine, sells a version of the drug approved for use in the United Kingdom at less than $1 per tablet.
Dr. Sharfstein, by the way, was Principal Deputy Commissioner of the US Food and Drug Administration from March 2009 to January 2011.
Addendum: I will be discussing/debating pharmaceutical policy with Dr. Sharfstein at an event sponsored by the Council on Foreign Relations in Washington, DC on the morning of Monday, January 25. It’s invitation only, but email me if you want an invite.
The FDA and Magical Thinking
Vox had a piece yesterday on the Cruz-Lee proposal to make it easier for U.S. patients to access drugs and devices already approved in other developed countries. The Vox piece had some howlers. Most notably this:
“There’s no evidence the FDA blocks innovation or makes innovation harder or makes it more costly,” said Kesselheim.
Frankly, that would be laughable were it not coming from a professor of medicine at Harvard Medical School. It costs well over a billion dollars to get the average new drug approved, and much of that cost comes from FDA-required clinical trials. Longer and larger clinical trials mean that the drugs that are eventually approved are safer. But longer trials also mean that good drugs are delayed. And the more expensive it is to produce new drugs, the fewer new drugs will be produced. In short, longer and larger trials mean drug delay and drug loss.
We live in a world of tradeoffs. Let’s debate the tradeoffs. But let’s not engage in magical thinking where there are no tradeoffs and “no evidence” that the FDA makes drug development more costly.
A more subtle error was committed by the author, who writes:
But it’s not clear that this legislation can solve the biggest problem here — the lack of promising treatments in the pipeline. In other words, a faster approval process can’t fix a dearth of innovation from labs themselves.
Many factors go into drug development that are outside the FDA’s purview. Nevertheless, faster drug approval can and does increase innovation. Approving drugs more quickly is equivalent to a decrease in the costs of research and development. Time is money. Reducing the cost of development increases the incentive to develop new drugs.
The Prescription Drug User Fee Act, for example, reduced drug approval times by about 10 months. Philipson et al. calculate that:
…the more rapid access of drugs on the market enabled by PDUFA saved the equivalent of 140,000 to 310,000 life years.
(PDUFA does not appear to have materially affected safety, but Philipson et al. calculate that even under a worst-case scenario the benefits of PDUFA far exceeded the costs.)
Moreover, Vernon et al. find that the reduction in approval time from PDUFA increased new drug development:
Controlling for other factors such as pharmaceutical profitability and cash flows, we estimate that a 10% decrease (increase) in FDA approval times leads to an increase (decrease) in R&D spending from between 1.4% and 2.0%. Combining this estimate with recent research on the link between PDUFA and FDA approval times…we calculate PDUFA may have incentivized an additional $10.8 billion to $15.4 billion in pharmaceutical R&D. Recent economic research has shown that the social rate of return on pharmaceutical R&D is very high; therefore, the social benefits of PDUFA (over and above the benefits of more rapid consumer access) are likely to be substantial.
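The arithmetic behind the quoted figure is straightforward to sketch. Here is a back-of-the-envelope version in Python; the elasticity range comes from the passage above, but the baseline R&D spending figure and the size of the PDUFA-induced cut in approval times are hypothetical placeholders rather than the paper’s actual inputs, so the output is illustrative only.

```python
# Back-of-the-envelope version of the Vernon et al. style calculation:
# combine an elasticity of R&D spending with respect to FDA approval times
# with an estimate of how much PDUFA shortened those times.

baseline_rd = 500e9      # assumed pharmaceutical R&D base (USD), a placeholder
time_reduction = 0.20    # assumed proportional cut in approval times from PDUFA

# Quoted estimate: a 10% cut in approval times raises R&D spending by 1.4-2.0%,
# i.e., an elasticity (in absolute value) of roughly 0.14 to 0.20.
elasticity_low, elasticity_high = 0.14, 0.20

extra_rd_low = baseline_rd * elasticity_low * time_reduction
extra_rd_high = baseline_rd * elasticity_high * time_reduction

print(f"Additional R&D induced: ${extra_rd_low/1e9:.1f}B to ${extra_rd_high/1e9:.1f}B")
```

The point of the exercise is simply that even a modest elasticity, applied to a large R&D base, yields billions of dollars of additional investment.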
Finally, return to the issue of reciprocity. Many of the critics of reciprocity respond with simple appeals to nationalism. We are the best! Rah, rah, rah! But if the critics were German or French they would argue that the EMA is superior to the FDA. Indeed, when I raise the issue of reciprocity with Europeans they respond in exactly the same way as Americans. How could anyone suggest that the EMA automatically approve drugs approved by the FDA! The horror.
The argument for reciprocity, however, isn’t that the FDA is uniquely bad or always worse than the EMA or vice-versa. The argument is that it’s wasteful to duplicate the lengthy approval process and that both agencies sometimes make mistakes. As a result, it’s simple common sense to let Americans avail themselves of drugs and devices approved in other developed countries.
Personalized Medicine and the FDA
In my post A New FDA for the Age of Personalized, Molecular Medicine I wrote:
Each patient is a unique, dynamic system and at the molecular level diseases are heterogeneous even when symptoms are not. In just the last few years we have expanded breast cancer into first four and now ten different types of cancer and the subdivision is likely to continue as knowledge expands. Match heterogeneous patients against heterogeneous diseases and the result is a high dimension system that cannot be well navigated with expensive, randomized controlled trials. As a result, the FDA ends up throwing out many drugs that could do good.
The Manhattan Institute has today taken out a full-page ad in the New York Times calling for a discussion about how to integrate personalized medicine with the FDA. The ad reads in part:
A new era in science and medicine calls for a new approach at the federal Food and Drug Administration, which determines whether any new treatment is safe and effective.
Every American has a stake in this change – because everyone will be a patient someday.
Congress should lay the foundation for a 21st century FDA by creating an external advisory network drawing on the expertise of the scientific and patient communities to assist the FDA in setting standards for how biomarkers can be better integrated into the drug development process.
This is a call for collaboration on an unprecedented scale to help the FDA chart a safe path for advancing biomarkers from discovery in a lab to your doctor’s office. We echo previous recommendations made by the President’s Council of Advisors on Science and Technology, the National Institutes of Health, a report from the National Research Council – and senior staff at the FDA itself.
The ad is signed by former FDA commissioner Andrew von Eschenbach, Peter Huber (whose excellent book The Cure in the Code lays out the science and policy of biomarkers), Eric Topol, and myself, among others.
See Project FDA for more.
Is the FDA Too Conservative or Too Aggressive?
I have long argued that the FDA has an incentive to delay the introduction of new drugs because approving a bad drug (Type I error) has more severe consequences for the FDA than does failing to approve a good drug (Type II error). In the former case at least some victims are identifiable and the New York Times writes stories about them and how they died because the FDA failed. In the latter case, when the FDA fails to approve a good drug, people die but the bodies are buried in an invisible graveyard.
In an excellent new paper (SSRN, also here) Vahid Montazerhodjat and Andrew Lo use a Bayesian analysis to model the optimal tradeoff in clinical trials among sample size, Type I error, and Type II error. Failing to approve a good drug is more costly, for example, the more severe the disease. Thus, for a very serious disease, we might be willing to accept a greater Type I error in return for a lower Type II error. The number of people with the disease also matters. Holding severity constant, for example, the more people with the disease, the more you want to increase the sample size to reduce Type I error. All of these variables interact.
In an innovation, the authors use the U.S. Burden of Disease Study to find the number of deaths and the disability severity caused by each major disease. Using these data, they estimate the costs of failing to approve a good drug. Similarly, using data on the costs of adverse medical treatment, they estimate the cost of approving a bad drug.
Putting all this together, the authors find that the FDA is often dramatically too conservative:
…we show that the current standards of drug-approval are weighted more on avoiding a Type I error (approving ineffective therapies) rather than a Type II error (rejecting effective therapies). For example, the standard Type I error of 2.5% is too conservative for clinical trials of therapies for pancreatic cancer—a disease with a 5-year survival rate of 1% for stage IV patients (American Cancer Society estimate, last updated 3 February 2013). The BDA-optimal size for these clinical trials is 27.9%, reflecting the fact that, for these desperate patients, the cost of trying an ineffective drug is considerably less than the cost of not trying an effective one.
(The authors also find that the FDA is occasionally a little too aggressive, but these errors are much smaller; for example, for prostate cancer therapies the optimal significance level is 1.2% compared with the standard 2.5%.)
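The logic of the Bayesian decision analysis is easy to sketch. Here is a toy version in Python, with all inputs as illustrative placeholders rather than the paper’s calibrated figures: the optimal significance level is simply the one that minimizes expected harm, where a Type I error exposes patients to an ineffective drug and a Type II error withholds an effective one. When the disease is severe, the Type II cost dominates and the cost-minimizing significance level rises well above the conventional 2.5%.

```python
# Minimal sketch of the Bayesian decision-analytic idea behind Montazerhodjat
# and Lo: pick the significance level that minimizes expected harm from
# Type I errors (approving an ineffective drug) and Type II errors (rejecting
# an effective one). All numbers below are illustrative placeholders.

import numpy as np
from scipy.stats import norm

def power(alpha, effect_size, n_per_arm):
    """Power of a one-sided two-sample z-test at significance level alpha."""
    ncp = effect_size * np.sqrt(n_per_arm / 2)   # noncentrality parameter
    return norm.cdf(ncp - norm.ppf(1 - alpha))

def expected_cost(alpha, p_effective, cost_type1, cost_type2, effect_size, n_per_arm):
    # Type I harm: drug is ineffective but gets approved (probability alpha).
    # Type II harm: drug is effective but gets rejected (probability 1 - power).
    type1 = (1 - p_effective) * alpha * cost_type1
    type2 = p_effective * (1 - power(alpha, effect_size, n_per_arm)) * cost_type2
    return type1 + type2

# Hypothetical inputs: for a severe disease, withholding an effective drug
# (cost_type2) is far more costly than approving an ineffective one (cost_type1).
params = dict(p_effective=0.3, cost_type1=1.0, cost_type2=10.0,
              effect_size=0.25, n_per_arm=200)

alphas = np.linspace(0.001, 0.5, 500)
costs = [expected_cost(a, **params) for a in alphas]
best = alphas[int(np.argmin(costs))]
print(f"Cost-minimizing significance level: {best:.3f}")   # well above 2.5% here
```

The paper does this far more carefully, calibrating the costs with burden-of-disease data and optimizing over sample size as well, but the direction of the result is the same.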
The result is important, especially because in a number of respects Montazerhodjat and Lo underestimate the costs of FDA conservatism. Most importantly, the authors are optimizing at the clinical trial stage, assuming that the supply of drugs available to be tested is fixed. Larger trials, however, are more expensive, and the greater the expense of FDA trials, the fewer new drugs will be developed. Thus, a conservative FDA reduces the flow of new drugs to be tested. In a sense, failing to approve a good drug has two costs: the opportunity cost of lives that could have been saved and the cost of reducing the incentive to invest in R&D. In contrast, approving a bad drug, while still an error, at least has the advantage of helping to incentivize R&D (similarly, a subsidy to R&D incentivizes R&D mostly by covering the costs of failed ventures).
The Montazerhodjat and Lo framework is also static: there is one test and then the story ends. In reality, drug approval has an interesting asymmetric dynamic. When a drug is approved for sale, testing doesn’t stop but moves into another stage, a combination of observational testing and sometimes more RCTs; this, after all, is how adverse events are discovered. Thus, Type I errors are corrected. On the other hand, for a drug that isn’t approved the story does end. With rare exceptions, Type II errors are never corrected. The Montazerhodjat and Lo framework could be interpreted as the reduced form of this dynamic process, but it’s better to think about the dynamism explicitly because it suggests that approval can come in a range: for example, approval with a black box warning, approval with evidence grading, and so forth. As these procedures tend to reduce the costs of Type I error, they tend to increase the costs of FDA conservatism.
Montazerhodjat and Lo also don’t examine the implications of heterogeneity of preferences or of disease morbidity and mortality. Some people, for example, are severely disabled by diseases that on average aren’t very severe; the optimal tradeoff for these patients will be different than for the average patient. One size doesn’t fit all. In the standard framework it’s tough luck for these patients. But if the non-FDA reviewing apparatus (patients/physicians/hospitals/HMOs/USP/Consumer Reports and so forth) works relatively well, and this is debatable but my work on off-label prescribing suggests that it does, this weighs heavily in favor of relatively large samples but low thresholds for approval. What the FDA is really providing is information, and we don’t need product bans to convey information. Thus, heterogeneity plus a reasonably effective post-testing choice process militates in favor of a Consumer Reports model for the FDA.
The bottom line, however, is that even without taking these further points into account, Montazerhodjat and Lo find that the FDA is far too conservative, especially for severe diseases. FDA regulations may appear to be creating safe and effective drugs, but they are also creating a deadly caution.
Hat tip: David Balan.
FDA approval at what price?
There is plenty of debate over whether the FDA should be looser or tougher with new drug approval, but I rarely hear the question posed as “approval at what price?”
One option would be to approve relatively strong and safe drugs at full Medicare and Medicaid reimbursement rates, if not higher. Drugs with lesser efficacy or higher risk could be approved at lower reimbursement prices. It is possible or perhaps even likely, of course, that private insurance companies would follow the government’s lead.
Dr. Peter Bach has promoted one version of this idea, and produced a calculator for valuing these drugs. In essence the government would be saying to lower quality producers “yes, you can continue to try to improve this drug, but not at public expense.”
I believe proposals of this kind deserve further attention, and in general the notion of regulatory approval need not be conceived in strictly binary, yes/no terms.
FDA Loses Another Free Speech Case
WSJ: A federal court in New York delivered a setback to the Food and Drug Administration, ruling the agency can’t bar a drug company from marketing a pill for off-label use as long as the claims are truthful.
The decision, by the federal district court for the Southern District of New York, is the latest in a line of such cases. It concerns the Irish company Amarin Pharma Inc. and its fish-oil-derived drug Vascepa, and it has been closely watched by the pharmaceutical industry. The company asked the court to stop the FDA from enforcing its off-label marketing ban, and the court agreed.
The ruling is important because in the last few years the FDA has extracted billions of dollars in settlements from pharmaceutical firms for engaging in what appears to be constitutionally protected speech. In fact, the courts have repeatedly ruled that FDA and Congressional restrictions on truthful and non-misleading off-label marketing are unconstitutional.
In Washington Legal Foundation v. Friedman, for example, the DC court issued an injunction preventing the FDA from prohibiting, restricting, sanctioning, or otherwise seeking to limit pharmaceutical and device manufacturers from disseminating information about off-label uses from peer-reviewed professional journals or textbooks. In U.S. v. Caronia the court (2nd Circuit) reversed a criminal conviction and said that the FDA cannot criminalize truthful promotion of off-label uses of approved drugs. Indeed, the court in that case defended the utility of such promotion:
…prohibiting off-label promotion by a pharmaceutical manufacturer while simultaneously allowing off-label use “paternalistically” interferes with the ability of physicians and patients to receive potentially relevant treatment information; such barriers to information about off-label use could inhibit, to the public’s detriment, informed and intelligent treatment decisions. See Va. Bd. of Pharmacy v. Va. Citizens Consumer Council, Inc., 425 U.S. 748, 770 (1976)
…See also Sorrell, 131 S. Ct. at 2670- 72 (“[The] fear that [physicians, sophisticated and experienced customers,] would make bad decisions if given truthful information” cannot justify content-based burdens on speech.”) (citing sources);
…Liquormart, 517 U.S. at 503 (“[B]ans against truthful, nonmisleading commercial speech . . . usually rest solely on the offensive assumption that the public will respond ‘irrationally’ to the truth. . . . The First Amendment directs us to be especially skeptical of regulations that seek to keep people in the dark for what the government perceives to be their own good.”).
In Washington Legal Foundation v. Henney the court summed up concisely:
The First Amendment is premised upon the idea that people do not need the government’s permission to engage in truthful, nonmisleading speech about lawful activity.
(By the way, it’s this line of cases that makes me think that 23andMe has a strong First Amendment case for presenting to customers information about their own DNA.)
The courts were exactly correct. Off-label uses of approved drugs are a vital part of the discovery process of modern medicine. New uses for old drugs are often discovered through serendipity and close observation in the field. Indeed, modern medicine moves faster than the FDA, and it often happens that the first-line therapy is an off-label treatment. Prohibiting firms from truthfully discussing such treatments with physicians is not just unconstitutional, it’s also paternalistic and harmful to patient welfare.
This case, Amarin v. FDA, is especially egregious because the company wants to discuss with physicians the results of its own FDA-approved trial. Amarin has a fish-oil-derived drug designed to reduce triglyceride levels, and it already has approval to sell and market this drug for patients with very high triglyceride levels. It also wanted approval to sell the drug to patients with high (but not very high) levels, and it conducted an FDA-approved trial that showed that the drug is safe and effective at reducing triglyceride levels in this set of patients.
Although the trial was successful, the FDA, for reasons discussed below, refused to grant approval. Amarin isn’t disputing the refusal, but it wanted to tell physicians the results of the trial and then let the physicians and their patients decide whether reducing triglyceride levels is something they want to do, given currently existing evidence about triglyceride levels and heart attacks. The FDA threatened to pursue civil and possibly criminal charges, but the court has now precluded the FDA from those pursuits.
Aside from the First Amendment issues, the case is also interesting as another example of how a capricious FDA can kill innovation through regulatory uncertainty. (The story is similar in many respects to that told by Joseph Gulfo in Innovation Breakdown; see my review.)
To wit: Amarin wanted approval to sell its drug to patients with high levels of triglycerides, and it obtained a special protocol assessment (SPA) agreement from the FDA to run a study in this population. Quoting the court:
An SPA agreement is a written agreement that a manufacturer may enter into with the FDA, which sets out the design and size parameters for clinical trials of a new drug, and the conditions under which the FDA would approve the drug. For the manufacturer, such an agreement minimizes development risk by providing regulatory predictability: Provided that the manufacturer follows the procedure set in the SPA agreement and the drug proves to meet the benchmarks for effectiveness set in the agreement, the FDA must approve the drug.
The results of the study were good:
The ANCHOR study achieved each numeric objective that the SPA Agreement had set: The results showed that Vascepa produced a statistically significant decrease in triglyceride levels in persons with persistently high triglycerides, as well as in other lipid, lipoprotein, and inflammatory biomarkers.
…Because Amarin had met all requirements for approval set out in the ANCHOR SPA Agreement, Amarin anticipated that the FDA would approve Vascepa for the additional use that Amarin sought, i.e., by patients with persistently high triglycerides.
Instead of approving the drug, however, the FDA rescinded the agreement. The FDA argued that although the drug did reduce triglyceride levels, it was no longer certain that reducing triglyceride levels would reduce cardiovascular events.
Can you imagine the tailspin this sent researchers at Amarin into when they learned that the drug would not be approved despite passing all the agreed-upon tests? (Read Gulfo for a vivid account of his case.)
Who will invest in bio-medical advances with this kind of risk? Sergey Brin said that he didn’t want to invest in health care because “It’s just a painful business to be in . . . the regulatory burden in the U.S. is so high that I think it would dissuade a lot of entrepreneurs.” It’s precisely this kind of regulatory uncertainty that an SPA was meant to avoid. By rescinding the agreement, the FDA is sending the message to investors that no one is safe.
Fast Tracking the FDA
Bart Madden and James Pinkerton suggest a new “free to choose” track for pharmaceuticals. Pharmaceuticals that showed initial effectiveness would be available for early sale, but all treatment information under the early-sale program would have to be reported to an open-access database.
After a drug successfully passes safety trials and shows initial effectiveness in clinical trials—that is, the early steps—a drug developer could request that their drug be available for sale on a “free to choose” track (the developer could elect also to continue on the FDA clinical trial track). As a result, patients such as Matt Bellina would be able to access innovative new drugs up to seven years earlier than waiting for a final FDA decision. For patients given only a few years—or months—to live, seven years sooner could spell life, not death.
Under our proposal, a patient’s doctor would be required to submit treatment results and medical information such as a patient’s genetic data to the open-access database. Doctors and patients would get real-time updates about the benefits and side effects of any “free to choose” drug and be able to make informed decisions about an early use of these new drugs versus approved drugs.
We might bear in mind that clinical trials involve patients who are mostly similar. On the other hand, because the “free to choose” option would be available to everyone, new insights would be obtained about how a drug performs for a far broader range of patients. These insights would better inform the biopharmaceutical industry, leading, in turn, to better allocation of research funds and faster innovation.
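To make the mechanics concrete, here is a minimal sketch (my own, not from Madden and Pinkerton) of what a record in such an open-access outcomes database, and a rolling benefit and side-effect summary computed from it, might look like. All field names are illustrative.

```python
# Sketch of an open-access "free to choose" outcomes database: doctors submit
# treatment records; anyone can compute up-to-date benefit and side-effect
# summaries per drug. Names and fields are hypothetical.

from dataclasses import dataclass, field
from collections import defaultdict
from typing import Dict, List

@dataclass
class TreatmentRecord:
    patient_id: str
    drug: str
    diagnosis: str
    genetic_markers: List[str]
    improved: bool                           # did the patient's condition improve?
    adverse_events: List[str] = field(default_factory=list)

class OutcomesDatabase:
    """Open-access store of treatment reports with rolling per-drug summaries."""

    def __init__(self) -> None:
        self.records: List[TreatmentRecord] = []

    def submit(self, record: TreatmentRecord) -> None:
        self.records.append(record)

    def summary(self, drug: str) -> Dict:
        matched = [r for r in self.records if r.drug == drug]
        if not matched:
            return {"patients": 0}
        adverse: Dict[str, int] = defaultdict(int)
        for r in matched:
            for event in r.adverse_events:
                adverse[event] += 1
        return {
            "patients": len(matched),
            "improvement_rate": sum(r.improved for r in matched) / len(matched),
            "adverse_events": dict(adverse),
        }

db = OutcomesDatabase()
db.submit(TreatmentRecord("p001", "drug_X", "disease_Y", ["marker_1"], improved=True))
db.submit(TreatmentRecord("p002", "drug_X", "disease_Y", ["marker_2"], improved=False,
                          adverse_events=["nausea"]))
print(db.summary("drug_X"))
```

Because every "free to choose" patient contributes a record, the summaries would reflect a far broader range of patients than a standard clinical trial.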
Bart’s excellent book Free to Choose Medicine has more on the proposal, which I think would speed drugs to patients and increase pharmaceutical research and development. Do note that I hold the Bartley J. Madden chair in economics at Mercatus at GMU and I have my biases.
FDA Device Regulation
In the interests of length, I had to sacrifice a few points in my WSJ review of Innovation Breakdown by Joseph Gulfo (excerpted on MR yesterday). In the review, I argued that the FDA could speed the approval of medical devices and reduce uncertainty by not reviewing devices directly but instead becoming a certifier of certifiers, as is done in Europe.
In fact, a US model is already in place. OSHA, the Occupational Safety and Health Administration, requires that a range of electrical products and materials meet certain safety standards, but it outsources certification to Underwriters Laboratories and other Nationally Recognized Testing Laboratories. We could and should do the same for medical devices and for drugs. Indeed, if a device or drug is permitted in a developed, advanced economy such as Europe, Australia, or Japan, then I see no reason why it ought not to be provisionally approved in the United States (and vice versa).
My paper with DiMasi and Milne showed that some FDA drug divisions appear to be much more productive than others, suggesting possibilities for substantial improvements if best practices were uniformly adopted. There also appear to be substantial differences between the regulation of drugs and devices, especially in recent years. Ian Hathaway and Robert Litan have a new paper on Entrepreneurship and Job Creation in the U.S. Life Sciences Sector that shows that new firm creation in the medical device sector has fallen drastically since 1990, far more than in the drug sector. Although there are likely many causes, the drop in the number of new firms is consistent with Gulfo’s experience of regulatory uncertainty and may suggest increases in regulatory costs for devices relative to drugs. Here is Hathaway and Litan:
The medical devices and equipment sector, on the other hand, saw new firm formations decline steadily and persistently between 1990 and 2011—falling by 695 firms or 53 percent during that period. Its share of new life sciences firms fell to 31 percent in 2011 from 50 percent in 1990. Unlike its life sciences sector counterparts, the decline in new firm formations in this segment appears to stretch beyond the cyclical effects of the Great Recession.

Ebola and the FDA
The Telegraph reports:
The two American doctors who have caught Ebola have been treated with a new “secret serum” which could potentially save their lives.
…A source close to the Atlanta hospital, where Dr Brantly is being treated, told CNN: “Within an hour of receiving the medication, Brantly’s condition was nearly reversed. His breathing improved; the rash over his trunk faded away.”
One of his doctors reportedly described the events as “miraculous.”
…Dr Writebol was also administered the drug, which was transported to Liberia in a special sub-zero container. She showed a less remarkable recovery, but is hoped to travel to the US on Tuesday to continue her treatment.
According to CNN, the drug was developed by the biotech firm Mapp Biopharmaceutical, based in California. The patients were told that this treatment had never been tried before in a human being but had shown promise in small experiments with monkeys.
…health workers said drugs that could fight Ebola are not particularly complicated but pharmaceutical firms see no economic reason to invest in making them because the virus’ few victims are poor Africans.
Of course, pharmaceutical firms are not going to invest millions in getting a drug through FDA trials for a disease that has only killed a few thousand people since being discovered in 1976. Nevertheless, some people find this simple logic difficult to accept.
Prof John Ashton, Britain’s leading public health doctor, condemned the “moral bankruptcy” of profit-driven drug developers.
The logic of profit-driven drug developers is no different from the logic of profit-driven automobile manufacturers. It isn’t profitable to make cars for people who can’t afford them, but the auto firms are rarely called morally bankrupt for not giving cars away to the poor. Moreover, it’s not at all obvious why the burden of producing unprofitable drugs should fall on the drug manufacturers. To the extent that there is an ethical case for developing drugs for the poor, it’s a burden that falls on all of us.
As Eric Crampton notes, there are at least two possible solutions. Either ensure, at taxpayer expense, a return on investment by subsidizing, offering prizes (as I suggested in Launching), or publicly investing in orphan drugs, or
…ease up the FDA trials for drugs in this kind of category. Does it really make sense to mandate placebo trials for drugs hitting diseases with 60% fatality rates? We are condemning people to a very high risk of death for the sake of ensuring that there aren’t drug side effects and that the drugs are more effective than placebos (pretty easy to tell quickly where the fatality rate is otherwise 60%!).
Rating the FDA by Division: Comparison with EMA
Reporters have been asking the FDA about my paper with DiMasi and Milne, An FDA Report Card. As you may recall, the upshot of that paper is that there is wide variance in the performance of FDA divisions. Here, for example, is the mean time to approval across divisions.
Our simple index, discussed in the paper, suggests that these differences are not easily explained by factors such as resources, complexity of task or differences in safety tradeoffs across divisions. In responding to our paper, however, the FDA has said that similar differences in time to approval by drug type are seen at other drug approval agencies. If true, that would be an important criticism.
Fortuitously, some relevant data crossed our desk recently. The Center for Innovation in Regulatory Science (CIRS), a UK-based research consortium, compared median review times at the FDA with those at the next most important drug regulatory agency in the world, the European Medicines Agency (they also look at the Japanese agency). To its credit, the FDA is faster on average than the EMA (thanks, PDUFA!). What is relevant for our purposes, however, is to compare differences across divisions.
The CIRS breaks drugs into broader classes than we used, but the story it tells for the FDA is similar to ours; anti-cancer drugs, for example, are approved much more quickly than neurology drugs. The story for the EMA, however, is very different from that for the FDA. At the EMA, all types of drugs are approved in roughly the same amount of time.

We have argued that the wide variance in performance at the FDA is suggestive of differences in productivity. The fact that we do not see the same wide variance in performance at the EMA is supportive of our argument. Our goal and conclusion still stand:
We support further study to identify the policies and procedures that are working in high-performing divisions, with the goal of finding ways to apply them in low-performing divisions, thereby improving review speed and efficiency.
Rating the FDA by Division
In previous work, I have argued that asymmetric incentives make the FDA too risk averse, with the result being excessive drug lag and drug loss. The FDA, however, is not a monolithic agency; it is divided into divisions that oversee different types of drugs. The divisions have different cultures, expectations, histories, and understandings. In my latest paper, written with Tufts researchers Joe DiMasi and Chris Milne, we put aside the question of global efficiency and ask a different question: How do the FDA divisions rate against one another? What we find is quite surprising: some of the FDA divisions appear to be much more productive than others. From the abstract:
After reviewing nearly 200 products accounting for 80 percent of new drug and biologic launches from 2004 to 2012, the authors find wide variation in division performance. In fact, the most productive divisions (Oncology and Antivirals) approve new drugs roughly twice as fast as the CDER average and three times faster than the least efficient divisions—without the benefit of greater resources, reduced complexity of task, or reduction in safety. The authors estimate that a modest narrowing of the CDER divisional productivity gap would reduce drug costs by nearly $900 million annually. The worth to patients, however, would be far greater if the agency could accelerate access to an additional generation of (about 25) drugs. Greater agency efficiency would be worth about $4 trillion in value to patients, from enhanced U.S. life expectancy. To reap such gains, this study encourages Congress and the FDA to more closely evaluate the agency’s most efficient drug review divisions, and apply the lessons learned across CDER. We also propose a number of reforms that the FDA and Congress should consider to improve efficiency, transparency, and consistency at the divisional level.
Andrew von Eschenbach, a former Commissioner of the FDA and Director of the National Cancer Institute and now chairman of the Manhattan Institute’s Project FDA, wrote a foreword to our paper. Von Eschenbach writes:
The authors of this report have taken a giant step…by assembling and analyzing a wide array of publicly available information about the relative performance of individual CDER divisions….Continuous, quality improvement measures routinely used by private industry could serve FDA leadership, sponsors, and patients by discerning factors that contribute to an optimal level of performance and, more important, disseminating such practices to ensure that all divisions achieve that performance. The payoff for such an effort could be enormous.
…Process improvement should not be a controversial proposal. An organization like the FDA—which is over a century old and which has maintained its current, basic organizational framework for decades—requires new tools to adapt to changing circumstances.
…I have enjoyed no greater privilege in my professional career than serving alongside the FDA’s talented staff. Today, the agency has more potential than ever to help the U.S. lead the world in advancing a biomedical revolution, one that will have an impact on every aspect of America’s economy and health-care system by improving health, increasing productivity, and reducing overall health-care costs.
…this report should be viewed as a positive, constructive contribution to a desperately needed dialogue on how to assist the agency in fulfilling this vital national goal.
Still Burned by the FDA
Excellent piece in the Washington Post on the FDA and sunscreen:
…American beachgoers will have to make do with sunscreens that dermatologists and cancer-research groups say are less effective and have changed little over the past decade.
That’s because applications for the newer sunscreen ingredients have languished for years in the bureaucracy of the Food and Drug Administration, which must approve the products before they reach consumers.
…The agency has not expanded its list of approved sunscreen ingredients since 1999. Eight ingredient applications are pending, some dating to 2003. Many of the ingredients are designed to provide broader protection from certain types of UV rays and were approved years ago in Europe, Asia, South America and elsewhere.
If you want to understand how dysfunctional regulation has become, ponder this sentence:
“This is a very intractable problem. I think, if possible, we are more frustrated than the manufacturers and you all are about this situation,”
Who said it? Janet Woodcock, director of the FDA’s Center for Drug Evaluation and Research! Or how about this:
Eleven months ago, in a hearing on Capitol Hill, FDA Commissioner Margaret A. Hamburg told lawmakers that sorting out the sunscreen issue was “one of the highest priorities.”
If this is a high priority, what happens to all the “low priority” drugs and medical devices?
The whole piece in the Washington Post is very good, read it all. I first wrote about this issue last year.
Addendum: See FDAReview.org for more on the FDA regulatory process and its reform.
The other hand of the FDA
Via Chaim Katz, here is a Bloomberg headline from 2012: “Asian Seafood Raised on Pig Feces Approved for U.S. Consumers.” Whether or not you agree with this decision (how good is disclosure?), you get the point.
A New FDA for the Age of Personalized, Molecular Medicine
In a brilliant new paper (pdf) (html) Peter Huber draws upon molecular biology, network analysis and Bayesian statistics to make some very important recommendations about FDA policy. Consider the following drugs (my list):
Drug A helps half of those to whom it is prescribed but it causes very serious liver damage in the other half. Drug B works well at some times but when administered at other times it accelerates the disease. Drug C fails to show any effect when tested against a placebo but it does seem to work in practice when administered as part of a treatment regime.
Which of these drugs should be approved and which rejected? The answer is that all of them should be approved; that is, all of them should be approved if we can target each drug to the right patient at the right time and with the right combination of other drugs. Huber argues that Bayesian adaptive testing, with molecular biology and network analysis providing priors, can determine which patients should get which drugs when and in what combinations. But we can only develop the data to target drugs if the drugs are actually approved and available in the field. The current FDA testing regime, however, is not built for adaptive testing in the field.
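Huber’s argument is verbal, but the flavor of Bayesian adaptive testing is easy to illustrate. Here is a toy sketch, not anything from the paper: a Thompson-sampling loop that maintains a posterior over each drug’s response rate separately for each molecular subtype and assigns each arriving patient the drug that currently looks best for their subtype. The subtypes, drugs, and response rates are all hypothetical.

```python
# Toy illustration of Bayesian adaptive assignment across molecular subtypes
# (Thompson sampling). Posteriors updated on the fly steer each subtype toward
# the drug that works for it, even when no drug "wins" on average.

import random

random.seed(0)

SUBTYPES = ["subtype_A", "subtype_B"]
DRUGS = ["drug_1", "drug_2"]

# Hypothetical true response rates: each drug helps one subtype and not the
# other, so a pooled trial would show little average effect for either drug.
TRUE_RATE = {("subtype_A", "drug_1"): 0.7, ("subtype_A", "drug_2"): 0.2,
             ("subtype_B", "drug_1"): 0.2, ("subtype_B", "drug_2"): 0.7}

# Beta(1, 1) prior on each (subtype, drug) response rate: [alpha, beta].
posterior = {key: [1, 1] for key in TRUE_RATE}

for patient in range(2000):
    subtype = random.choice(SUBTYPES)
    # Thompson sampling: draw a plausible response rate from each drug's
    # posterior for this subtype and treat with the drug whose draw is highest.
    draws = {d: random.betavariate(*posterior[(subtype, d)]) for d in DRUGS}
    drug = max(draws, key=draws.get)
    success = random.random() < TRUE_RATE[(subtype, drug)]
    posterior[(subtype, drug)][0 if success else 1] += 1

for (subtype, drug), (a, b) in sorted(posterior.items()):
    print(f"{subtype} + {drug}: posterior mean response = {a / (a + b):.2f} "
          f"({a + b - 2} patients)")
```

In Huber’s framework the priors would come from molecular biology and network analysis rather than being flat, and the posteriors would keep improving as field data accumulate; the sketch only conveys the adaptive logic.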
The current regime was built during a time of pervasive ignorance when the best we could do was throw a drug and a placebo against a randomized population and then count noses. Randomized controlled trials are critical, of course, but in a world of limited resources they fail when confronted by the curse of dimensionality. Patients are heterogeneous and so are diseases. Each patient is a unique, dynamic system and at the molecular level diseases are heterogeneous even when symptoms are not. In just the last few years we have expanded breast cancer into first four and now ten different types of cancer and the subdivision is likely to continue as knowledge expands. Match heterogeneous patients against heterogeneous diseases and the result is a high dimension system that cannot be well navigated with expensive, randomized controlled trials. As a result, the FDA ends up throwing out many drugs that could do good:
Given what we now know about the biochemical complexity and diversity of the environments in which drugs operate, the unresolved question at the end of many failed clinical trials is whether it was the drug that failed or the FDA-approved script. It’s all too easy for a bad script to make a good drug look awful. The disease, as clinically defined, is, in fact, a cluster of many distinct diseases: a coalition of nine biochemical minorities, each with a slightly different form of the disease, vetoes the drug that would help the tenth. Or a biochemical majority vetoes the drug that would help a minority. Or the good drug or cocktail fails because the disease’s biochemistry changes quickly but at different rates in different patients, and to remain effective, treatments have to be changed in tandem; but the clinical trial is set to continue for some fixed period that doesn’t align with the dynamics of the disease in enough patients.
Or side effects in a biochemical minority veto a drug or cocktail that works well for the majority. Some cocktail cures that we need may well be composed of drugs that can’t deliver any useful clinical effects until combined in complex ways. Getting that kind of medicine through today’s FDA would be, for all practical purposes, impossible.
The alternative to the FDA process is large collections of data on patient biomarkers, diseases, and symptoms, all evaluated on the fly by Bayesian engines that improve over time as more data is gathered. The problem is that the FDA is still locked in an old mindset when it refuses to permit any drugs that are not “safe and effective” despite the fact that these terms can only be defined for a large population by doing violence to heterogeneity. Safe and effective, moreover, makes sense only when physicians are assumed to be following simple, A to B, drug to disease, prescribing rules and not when they are targeting treatments based on deep, contextual knowledge that is continually evolving.
In a world with molecular medicine and mass heterogeneity the FDA’s role will change from the yes-no single rule that fits no one to being a certifier of biochemical pathways:
By allowing broader use of the drug by unblinded doctors, accelerated approval based on molecular or modest—and perhaps only temporary—clinical benefits launches the process that allows more doctors to work out the rest of the biomarker science and spurs the development of additional drugs. The FDA’s focus shifts from licensing drugs, one by one, to regulating a process that develops the integrated drug-patient science to arrive at complex, often multidrug, prescription protocols that can beat biochemically complex diseases.
…As others take charge of judging when it is in a patient’s best interest to start tinkering with his own molecular chemistry, the FDA will be left with a narrower task—one much more firmly grounded in solid science. So far as efficacy is concerned, the FDA will verify the drug’s ability to perform a specific biochemical task in various precisely defined molecular environments. It will evaluate drugs not as cures but as potential tools to be picked off the shelf and used carefully but flexibly, down at the molecular level, where the surgeon’s scalpels and sutures can’t reach.
In an important section, Huber notes that some of the biggest successes of the drug system in recent years occurred precisely because the standard FDA system was implicitly bypassed by orphan drug approval, accelerated approval and off-label prescribing (see also The Anomaly of Off-Label Prescribing).
But for these three major licensing loopholes, millions of people alive today would have died in the 1990s. Almost all the early HIV- and AIDS-related drugs—thalidomide among them—were designated as orphans. Most were rushed through the FDA under the accelerated-approval rule. Many were widely prescribed off-label. Oncology is the other field in which the orphanage, accelerated approval, and off-label prescription have already played a large role. Between 1992 and 2010, the rule accelerated patient access to 35 cancer drugs used in 47 new treatments. For the 26 that had completed conventional followup trials by the end of that period, the median acceleration time was almost four years.
Together, HIV and some cancers have also gone on to demonstrate what must replace the binary, yes/ no licensing calls and the preposterously out-of-date Washington-approved label in the realm of complex molecular medicine.
Huber’s paper has a foreword by Andrew C. von Eschenbach, former commissioner of the FDA, who concludes:
For precision medicine to flourish, Congress must explicitly empower the agency to embrace new tools, delegate other authorities to the NIH and/or patient-led organizations, and create a legal framework that protects companies from lawsuits to encourage the intensive data mining that will be required to evaluate medicines effectively in the postmarket setting. Last but not least, Congress will also have to create a mechanism for holding the agency accountable for producing the desired outcomes.