You Have Been Warned
New paper in Science, A single mutation in bovine influenza H5N1 hemagglutinin switches specificity to human receptors. If that isn’t clear enough, here is the editor’s summary:
In 2021, a highly pathogenic influenza H5N1 clade 2.3.4.4b virus was detected in North America that is capable of infecting a diversity of avian species, marine mammals, and humans. In 2024, clade 2.3.4.4b virus spread widely in dairy cattle in the US, causing a few mild human cases, but retaining specificity for avian receptors. Historically, this virus has caused up to 30% fatality in humans, so Lin et al. performed a genetic and structural analysis of the mutations necessary to fully switch host receptor recognition. A single glutamic acid to leucine mutation at residue 226 of the virus hemagglutinin was sufficient to enact the change from avian to human specificity. In nature, the occurrence of this single mutation could be an indicator of human pandemic risk. —Caroline Ash
Time to stock up on Tamiflu and Xofluza.
Addendum: See also A Bird Flu Pandemic Would Be One of the Most Foreseeable Catastrophes in History
Info Finance has a Future!
Info finance is Vitalik Buterin’s term for combining things like prediction markets and news. Indeed, a prediction market like Polymarket is “a betting site for the participants and a news site for everyone else.”
Here’s an incredible instantiation of the idea from Packy McCormick. As I understand it, betting odds are drawn from Polymarket, context is provided by Perplexity and Grok, a script is written by ChatGPT and read by an AI using Packy’s voice, and a video is produced by combining the audio with some simple visuals. All automated.
What’s really impressive, however, is that it works. I learned something from the final product. I can see reading this like a newspaper.
Info finance has a future!
Addendum: See also my in-depth a16z crypto podcast (Apple, Spotify) talking with Kominers and Chokshi for more.
Marginal Revolution Podcast–The New Monetary Economics!
Today on the MR Podcast Tyler and I discuss the “New Monetary Economics”. Here’s the opening:
TABARROK: Today we’re going to be talking about the new monetary economics. Now, perhaps the first thing to say is that it’s not new anymore. The new monetary economics refers to a set of claims and ideas about monetary economics from the 1980s, more or less, coming from people mostly in finance, like Fischer Black and Eugene Fama, and making some very bold claims that macroeconomics had gotten some things completely wrong. You and Randall Kroszner also wrote a great book, Explorations in the New Monetary Economics, and that appeared in 1994. [Someone should reprint this book!, AT]
Now, most people thought that the ideas of the new monetary economics were simply crazy. Black and Fama, for example, they argued that the Fed was essentially impotent; that it couldn’t control the money supply or even the price level, let alone the economy, at least in some circumstances.
COWEN: Fischer Black started the new monetary economics with a 1970 article, very early. Not in a standard journal, of course. Black argued that the Fed doesn’t matter. The supply of money and the price level were not closely related in any obvious way. There’s a well-known story where Fischer Black showed up at Chicago to present a paper at Milton Friedman’s monetary seminar. Friedman started off by introducing Black with, “Fischer Black’s paper is totally wrong. He’s going to present it to us. We have two hours to figure out why.”
At the same time, people like Paul Samuelson, Robert Solow, the MIT crowd, they also just said Black is totally wrong. He’s a genius on finance and options pricing, but when it comes to monetary economics, just forget about it. Dismiss him. They even said this in print at times.
The ideas of the NME remain as counterintuitive as ever–is it really possible that the Fed has no power over inflation, let alone the real economy? Yet the ideas seem increasingly relevant to modern, sophisticated, highly liquid financial markets and monetary systems, including crypto. If anything, the NME has become harder to dismiss, as the world theorized by its proponents in the 1970s and 1980s mirrors today’s reality far more closely than it did their own. The NME may now be old, but the ideas remain as challenging and even as inspiring as ever.
I am not sure that either Tyler or I have a solid conclusion on the NME but we invite you to join us on this exploration.
Subscribe now to take a small step toward a much better world: Apple Podcasts | Spotify | YouTube.
Some Simple Economics of the Google Antitrust Case
The case is straightforward: Google pays firms like Apple billions of dollars to make its search engine the default. (N.B. I would rephrase this as Apple charges Google billions of dollars to make its search engine the default–a phrasing which matters if you want to understand what is really going on. But set that aside for now.) Consumers, however, can easily switch to other search engines by downloading a different browser or changing their default settings. Why don’t they? Because the minor transaction costs are not worth the effort. Moreover, if Google provides the best search experience, most users have no incentive to switch.
Consequently, any potential harm to consumers is limited to minor switching costs, and any remedies should be proportionate. Proposals such as forcing Google to divest Chrome or Android are vastly disproportionate to the alleged harm and risk being counterproductive. Google’s Android has significantly increased competition in the smartphone market, and ChromeOS has done the same for laptops. Google has invested billions in increasing competition in its complements. Google was able to make these investments because they paid off in revenue to Google Search and Google Ads. Kill the profit center and kill the incentive to invest in competition. Unintended consequences.
I argued above that consumer harm is limited to minor switching costs. The plaintiffs counter-argue that Google’s purchase of default status “forecloses” competitors from achieving the scale necessary to compete effectively. This argument relies on network effects–more searches improve search quality through better data. However, this creates a paradox: on the plaintiffs’ theory, two or three firms each operating at smaller scale is worse than one firm operating at large scale. For the plaintiffs’ argument to hold, it must be shown that we are at the exact sweet spot where the benefits of increased competition in lowering the price of advertising outweigh the efficiency losses to consumers in search quality from reduced scale. Yet there is no evidence in the case—nor even an attempt—to demonstrate that we are at such a sweet spot.
Or perhaps the argument is that with competition we would get even better search, but that argument can’t be right because the costs of switching, as noted above, are bounded by some minor transaction costs. Thus, if a competitor could offer better search, it would easily gain scale (e.g., AI search; see below).
Traditional foreclosure analysis requires showing both substantial market closure and consumer harm. Given the ease of switching and the complex relationship between scale and search quality, proving such harm becomes challenging and my read is that the plaintiffs didn’t prove harm.
In my view, the best analogy to the Google antitrust case is Coke and Pepsi battling for shelf space in the supermarket. Returning to my earlier parenthetical point, Apple is like the supermarket charging for prime shelf placement. Is this a significant concern? Not really. Eye-level placement matters to Coke and Pepsi but by construction they are competing for consumers who don’t much care which sugary, carbonated beverage they consume. For consumers who do care, the inconvenience is limited to reaching to a different shelf.
To add insult to injury, the antitrust case is happening when Google is losing advertising share and is under pressure from a new search technology, Artificial Intelligence. AI search from OpenAI, Anthropic, Meta Llama, and xAI is very well funded and making rapid progress. Somehow AI search did manage to achieve scale! As usual, the market appears more effective than the antitrust authorities at creating competition.
Addendum: Admittedly this is outside the remit of the judge, but the biggest anti-competitive activities in tech are probably government policies that slow the construction of new power generation, new power lines, new data centers, and deals between power generators and data centers. I’d prefer the government take on its own anticompetitive effects before going after extremely successful tech companies that have clearly made consumers much better off.
A Bird Flu Pandemic Would Be One of the Most Foreseeable Catastrophes in History
Zeynep Tufekci writing in the NYTimes hits the nail on the head:
The H5N1 avian flu, having mutated its way across species, is raging out of control among the nation’s cattle, infecting roughly a third of the dairy herds in California alone. Farmworkers have so far avoided tragedy, as the virus has not yet acquired the genetic tools to spread among humans. But seasonal flu will vastly increase the chances of that outcome. As the colder weather drives us all indoors to our poorly ventilated houses and workplaces, we will be undertaking an extraordinary gamble that the nation is in no way prepared for.
All that would be more than bad enough, but we face these threats gravely hobbled by the Biden administration’s failure — one might even say refusal — to respond adequately to this disease or to prepare us for viral outbreaks that may follow.
…Devastating influenza pandemics arise throughout the ages because the virus is always looking for a way in, shape shifting to jump among species in ever novel forms. Flu viruses have a special trick: If two different types infect the same host — a farmworker with regular flu who also gets H5N1 from a cow — they can swap whole segments of their RNA, potentially creating an entirely new and deadly virus that has the ability to spread among humans. It’s likely that the 1918 influenza pandemic, for example, started as a flu virus of avian origin that passed through a pig in eastern Kansas. From there it likely infected its first human victim before circling the globe on a deadly journey that killed more people than World War I.
And that’s why it’s such a tragedy that the Biden administration didn’t — or couldn’t — do everything necessary to snuff out the U.S. dairy cattle infection when the outbreak was smaller and easier to address.
Will there be a large outbreak among humans? Probably not. But a 9% probability of a bad event warrants more than a shrug. Bad doesn’t have to be on the scale of COVID to warrant precaution. The 2009 H1N1 flu pandemic, while relatively mild, infected about 61 million people in the U.S., leading to 274,000 hospitalizations, 12,400 deaths, and billions of dollars in economic costs.
H5N1 will likely pass us over—but only the weak rely on luck. Strong civilizations don’t pray for mercy from microbes; they crush them. Each new outbreak should leave us not relieved, but better armed, better trained and better prepared for nature’s next assault.
Literacy Rates and Simpson’s Paradox
Max T. at Maximum Progress shows that between 1992 and 2003 US literacy rates fell dramatically within every single educational category but the aggregate literacy rate didn’t budge. A great example of Simpson’s Paradox! The easiest way to see how this is possible is just to imagine that no one’s literacy level changes but everyone moves up an educational category. The result is zero increase in literacy but falling literacy rates in each category.
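To make the mechanism concrete, here is a toy version with invented numbers: a hundred people, no one’s literacy changes, some move up an educational category, and every category’s literacy rate falls while the aggregate rate stays flat at 63%.

```python
# Toy illustration of Simpson's paradox in the literacy data.
# All numbers are invented: 100 people, no individual's literacy
# changes, people only move up an educational category, yet the rate
# falls within every category while the aggregate rate is unchanged.

def rates(groups):
    """Return per-category and aggregate literacy rates."""
    per = {cat: lit / n for cat, (n, lit) in groups.items()}
    total_n = sum(n for n, _ in groups.values())
    total_lit = sum(lit for _, lit in groups.values())
    return per, total_lit / total_n

# (people, literate) per category in the first period
period1 = {"no degree": (50, 20), "high school": (30, 24), "college": (20, 19)}

# Same 100 people later: 20 move no-degree -> high school (10 of them
# literate), 15 move high school -> college (12 of them literate).
# No one's literacy changes.
period2 = {"no degree": (30, 10), "high school": (35, 22), "college": (35, 31)}

per1, agg1 = rates(period1)
per2, agg2 = rates(period2)

for cat in period1:
    assert per2[cat] < per1[cat]   # every category's rate fell...
assert agg1 == agg2 == 0.63        # ...but the aggregate didn't budge
```

The paradox is pure composition: the movers are above average for the category they leave and below average for the category they join, so both rates fall even though no one changes.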
Two interesting things follow. First, this is very suggestive of credentialing and the signaling theory of education. Second, and more originally, Max suggests that total factor productivity is likely to have been mismeasured. Total factor productivity tells us how much more output can we get from the same inputs. If inputs increase, we expect output to increase so to measure TFP we must subtract any increase in output due to greater inputs. It’s common practice, however, to use educational attainment as a (partial) measure of skill or labor quality. If educational attainment is just rising credentialism, however, then this overestimates the increase in output due to labor skill and underestimates the gain to TFP.
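The measurement point can be sketched with a stylized growth-accounting exercise (all growth rates invented for illustration): if credential inflation is counted as real skill growth, the measured labor input is overstated and the TFP residual is understated by exactly the spurious labor-quality term.

```python
# Stylized growth accounting under a Cobb-Douglas production function
# Y = A * K^alpha * (h*L)^(1-alpha). All growth rates are invented.
alpha = 1/3
gY, gK, gL = 0.30, 0.20, 0.10   # growth of output, capital, raw labor
g_h = 0.08                       # labor-quality growth implied by rising
                                 # educational attainment (assumed)

# Standard residual: credits the extra schooling as real skill.
tfp_standard = gY - alpha * gK - (1 - alpha) * (gL + g_h)

# If the extra schooling is pure credentialism, h is truly unchanged:
tfp_credentialism = gY - alpha * gK - (1 - alpha) * gL

# The gap is exactly the spurious labor-quality credit, (1-alpha)*g_h.
print(tfp_standard, tfp_credentialism)
```

On these numbers the conventional residual understates TFP growth by about 5 percentage points, which is Max’s point: output is what it is, but its attribution between ideas and inputs shifts.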
This does not imply that we are richer than we actually are–output is what it is–but it does imply that if we want to know why we haven’t grown richer as quickly as we did in the past we should direct less attention to ideas and TFP and more attention to the failure to truly increase human capital.
Thanksgiving and the Lessons of Political Economy
It’s been a while, so time to re-up my 2004 post on Thanksgiving and the lessons of political economy. Here it is with no indent:
It’s one of the ironies of American history that when the Pilgrims first arrived at Plymouth Rock they promptly set about creating a communist society. Of course, they were soon starving to death.
Fortunately, “after much debate of things,” Governor William Bradford ended corn collectivism, decreeing that each family should keep the corn that it produced. In one of the most insightful statements of political economy ever penned, Bradford described the results of the new and old systems.
[Ending corn collectivism] had very good success, for it made all hands very industrious, so as much more corn was planted than otherwise would have been by any means the Governor or any other could use, and saved him a great deal of trouble, and gave far better content. The women now went willingly into the field, and took their little ones with them to set corn; which before would allege weakness and inability; whom to have compelled would have been thought great tyranny and oppression.
The experience that was had in this common course and condition, tried sundry years and that amongst godly and sober men, may well evince the vanity of that conceit of Plato’s and other ancients applauded by some of later times; that the taking away of property and bringing in community into a commonwealth would make them happy and flourishing; as if they were wiser than God. For this community (so far as it was) was found to breed much confusion and discontent and retard much employment that would have been to their benefit and comfort. For the young men, that were most able and fit for labour and service, did repine that they should spend their time and strength to work for other men’s wives and children without any recompense. The strong, or man of parts, had no more in division of victuals and clothes than he that was weak and not able to do a quarter the other could; this was thought injustice. The aged and graver men to be ranked and equalized in labours and victuals, clothes, etc., with the meaner and younger sort, thought it some indignity and disrespect unto them. And for men’s wives to be commanded to do service for other men, as dressing their meat, washing their clothes, etc., they deemed it a kind of slavery, neither could many husbands well brook it. Upon the point all being to have alike, and all to do alike, they thought themselves in the like condition, and one as good as another; and so, if it did not cut off those relations that God hath set amongst men, yet it did at least much diminish and take off the mutual respects that should be preserved amongst them. And would have been worse if they had been men of another condition. Let none object this is men’s corruption, and nothing to the course itself. I answer, seeing all men have this corruption in them, God in His wisdom saw another course fitter for them.
Among Bradford’s many insights it’s amazing that he saw so clearly how collectivism failed not only as an economic system but that even among godly men “it did at least much diminish and take off the mutual respects that should be preserved amongst them.” And it shocks me to my core when he writes that to make the collectivist system work would have required “great tyranny and oppression.” Can you imagine how much pain the twentieth century could have avoided had Bradford’s insights been more widely recognized?
Environmental “Justice” Recreates Redlining
It’s been said that the radical left often ends up duplicating the policies of the radical right, just under different names and justifications, e.g. separate but equal, scientific thinking is “white” thinking and so forth. Here’s another example from Salim Furth: the re-creation of redlining. Redlining was the practice of making it more difficult to access financial products such as mortgages by grading some neighborhoods as “hazardous” for investment. Either by design or result, redlining was often associated with minority populations.
Salim shows that Massachusetts has created a modern redlining system.
In Massachusetts, the context is that MEPA (its mini-NEPA) requires projects of a certain size to go through either a moderately-expensive or a quite-expensive process. Some types of projects automatically [require] the quite-expensive Environmental Impact Review process. The #maleg passed a 2021 “Environmental Justice” law, which defined certain people – oh euphemism treadmill! – as “Environmental Justice populations.”…So any housing (or other) project that requires a permit from a state agency and is within 1 mile of a “Environmental Justice population” now automatically triggers the expensive EIR process….How expensive? I was told it can run from $150k to $1m, and take 6 to 12 months. That’s a lot of additional delay in a state where delays are already extreme.
If there’s a, uh, silver lining here, it’s that “EJ Population” is defined so capaciously that it includes super-rich areas of Lexington (32% Asian, $206k median hh income), because all “minorities” are automatically disadvantaged. So it’s much less targeted to disinvested places than the original redlining. But the downside is that it’s *extremely well targeted* to discourage investment anywhere near transit or jobs. The non-EJ places are the sprawly exurbs. So maybe they *tried* to reinvent redlining, but all they really accomplished was reinventing subsidies for sprawl and raising housing costs along the way!
…This is a good, sobering reminder that for every 1 step forward by pro-housing advocacy, the blue states can manage 2 steps backward via wokery, proceduralism and anti-market ideas…
Regulating Sausages
In the comments on Sunstein on DOGE many people argued that regulations were mostly about safety. Well, maybe. It’s best to think about this in the context of a real example. Here is a tiny bit of the Federal Meat Inspection Act regulating sausage production:
In the preparation of sausage, one of the following methods may be used:
Method No. 1. The meat shall be ground or chopped into pieces not exceeding three fourths of an inch in diameter. A dry-curing mixture containing not less than 3 1⁄3 pounds of salt to each hundredweight of the unstuffed sausage shall be thoroughly mixed with the ground or chopped meat. After being stuffed, sausage having a diameter not exceeding 3 1⁄2 inches, measured at the time of stuffing, shall be held in a drying room not less than 20 days at a temperature not lower than 45 °F., except that in sausage of the variety known as pepperoni, if in casings not exceeding 1 3⁄8 inches in diameter measured at the time of stuffing, the period of drying may be reduced to 15 days. In no case, however, shall the sausage be released from the drying room in less than 25 days from the time the curing materials are added, except that sausage of the variety known as pepperoni, if in casings not exceeding the size specified, may be released at the expiration of 20 days from the time the curing materials are added. Sausage in casings exceeding 3 1⁄2 inches, but not exceeding 4 inches, in diameter at the time of stuffing, shall be held in a drying room not less than 35 days at a temperature not lower than 45 °F., and in no case shall the sausage be released from the drying room in less than 40 days from the time the curing materials are added to the meat.
The act goes on like this for many, many pages. All to regulate sausages. Sausage making, once an artisan’s craft, has become a compliance exercise that perhaps only corporations can realistically manage. One can certainly see that regulations of this extensiveness lock-in production methods. Woe be to the person who wants to produce a thinner, fatter or less salty sausage let alone who tries to pioneer a new method of sausage making even if it tastes better or is safer. Is such prescriptive regulation the only way to maintain the safety of our sausages? Could not tort law, insurance, and a few simple rules substitute at lower cost and without stifling innovation?
Prediction Markets Podcast
I was delighted to appear on the a16z crypto podcast (Apple, Spotify) talking with Scott Duke Kominers (Harvard) and Sonal Chokshi about prediction markets. It’s an excellent discussion. We talk about prediction markets, polling, and the recent election but also about prediction markets for replicating scientific research, futarchy, dump the CEO markets, AIs and prediction markets, the relationship of blockchains to prediction markets and going beyond prediction markets to other information aggregation mechanisms.
Sunstein on DOGE
Good advice from Cass Sunstein, who did improve government efficiency as head of OIRA:
There is a major focus these days on the topic of government efficiency, spurred by the creation of what is being called a “Department of Government Efficiency.” I have had the good fortune of being involved in simplification of government, and reduction of paperwork and regulatory burdens, in various capacities, and here are six quick and general notations.
- The Administrative Procedure Act is central to the relevant project. It needs to be mastered. It offers opportunities and obstacles. No one (not even the president) can clap and eliminate regulations. It’s important to know the differences among IFRs, TFRs, NPRMs, FRs, and RFIs. (The best of the bunch, for making rules or eliminating rules: FRs. They are final rules.)
- The Paperwork Reduction Act needs to be mastered. There is far too much out there in the way of administrative barriers and burdens. The PRA is the route for eliminating them. There’s a process there.
- The Office of Information and Regulatory Affairs is, for many purposes, the key actor here. (I headed the office from 2009-2012.) A reduce-the-regulations effort probably has to go through that Office. Its civil servants have a ton of expertise. They could generate a bunch of ideas in a short time.
- It is important to distinguish between the flow of new burdens and regulations and the stock of old ones. They need different processes. The flow is a bit easier to handle than the stock.
- The law, as enacted by Congress, leaves the executive branch with a lot of flexibility, but also imposes a lot of constraints. Some of the stock is mandatory. Some of the flow is mandatory. It is essential to get clarity on the details there.
- The courts! It’s not right to say that recent Supreme Court decisions give the executive branch a blank check here. In some ways, they impose new obstacles. Any new administration needs a full understanding of Loper Bright, the major questions doctrine, Seila Law, and much more (jargon, I know, I know).
Human Challenge Trials Aren’t Riskier than RCTs
Nature: Keller Scholl got out of quarantine 13 days ago, and he’s still not feeling 100%. The itchiness — far and away the worst symptom, he says — is mostly gone, and now the graduate student just feels exhausted. “I’m trying to get enough sleep,” he says.
Scholl’s symptoms might be uncomfortable, but they are also of his own making. That’s because he signed up to be a volunteer in the first human ‘challenge trial’ involving Zika virus, a mosquito-borne pathogen that can cause fever, pain and, in some cases, a brain-development problem in infants. In standard infectious-disease trials, researchers test drugs or vaccines on people who already have, or might catch, a disease. But in challenge trials, healthy people agree to become infected with a pathogen so that scientists can gather preliminary data on possible drugs and vaccines before bigger trials take place. “Accelerating a Zika vaccine by a month, a few days, that does a lot of good in the world,” says Scholl, who studies at Pardee RAND Graduate School in Santa Monica, California.
Keller spent time here at GMU working with Robin Hanson and hanging out with the lunch gang. Way to go Keller! Thank you!
The rest of the article uncritically repeats the usual claims from so-called “bioethicists” that human challenge trials (HCTs) are unethical because they involve risks. Of course, HCTs carry risks—so what? Randomized controlled trials (RCTs) also expose participants to risk. Indeed, for participants in the placebo arm of an RCT, the risks are identical. Furthermore, since RCTs require more participants than HCTs to achieve statistical validity, they expose more people to harm and, as a result, it’s even possible that more participants are harmed in an RCT than in an HCT. Thus, HCTs are not necessarily riskier to participants than RCTs and, of course, to the extent that they speed up results, they can save many lives and greatly reduce risk to everyone else in the larger society.
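The sample-size point can be made concrete with a standard two-proportion power calculation (attack rates and vaccine efficacy are invented for illustration): a field RCT that must wait for rare natural infections needs thousands of participants, while a challenge trial with guaranteed exposure needs only a few dozen.

```python
# Back-of-envelope sample sizes for a vaccine trial, using the standard
# two-proportion formula (5% two-sided alpha, 80% power). The attack
# rates and 60% efficacy below are invented for illustration.
import math

def n_per_arm(p0, p1, z_a=1.96, z_b=0.84):
    """Participants per arm to detect the drop from p0 to p1."""
    pbar = (p0 + p1) / 2
    num = (z_a * math.sqrt(2 * pbar * (1 - pbar))
           + z_b * math.sqrt(p0 * (1 - p0) + p1 * (1 - p1))) ** 2
    return math.ceil(num / (p0 - p1) ** 2)

efficacy = 0.6  # assumed vaccine efficacy

# Field RCT: say ~2% of the placebo arm is infected naturally
# during the trial window.
rct = n_per_arm(p0=0.02, p1=0.02 * (1 - efficacy))

# Challenge trial: deliberate exposure infects ~90% of the unvaccinated.
hct = n_per_arm(p0=0.90, p1=0.90 * (1 - efficacy))

print(rct, hct)  # roughly 1,500 per arm vs. a few dozen
```

The two-order-of-magnitude gap is the point: fewer participants exposed to risk, and an answer in weeks rather than flu seasons.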
In my talk, The Economic Way of Thinking in a Pandemic (starting around 10:52, though the entire presentation is worthwhile), I explain the real reason why bioethicists and physicians hesitate over human challenge trials: they fear feeling personally responsible if a participant is harmed. “We exposed this person to risk, and they died.” Well, yes. But my response is, it’s not about you! Set aside personal emotions and focus on what saves the most lives.
Hat tip: Alexander Berger who pointed to this story that I had missed earlier.
MR Podcast: Insurance!
In our new Marginal Revolution Podcast Tyler and I talk insurance, the history of insurance, the economics of insurance, the prospects for new types of insurance and more. Did you know that life insurance was once considered repugnant and was often illegal?
Tyler and I were both surprised how little good work there is on insurance. Here’s Tyler:
[Y]ou look at microeconomic theory. You feel all insurance should be a simple thing. There is risk aversion. You buy the contract. You look at the actual history. It’s very hard to make sense of it. The more I learned, I found the more questions I had. I didn’t fall into some, oh, now I understand what was happening kind of pattern. And the second is simply, I had been underrating Charles Ives. He was more than a great composer. Those are my takeaways.
Here’s one more bit:
COWEN: I want to get to the big, big question about insurance and see what you think. This is my worry. My worry is the agency problems behind insurance never have been solved.
….TABARROK: It is a peculiar market in the sense that all of your revenues come early.
COWEN: That’s right.
TABARROK: You’re selling all of this insurance, and everything is great because all of the money is coming in and your costs don’t come until much, much later. Your customers need to be convinced that you’re going to be around for a long time and are going to fulfill these implicit debts. Which is one reason insurance companies like to have big buildings with giant columns, like banks, to make them look solid. How do we guarantee that? I absolutely agree that’s a huge problem. I hate to say, but, there is a lot of insurance regulation which is precisely meant to deal with this problem.
COWEN: At the state level, you can choose the state. There’s reinsurance through Bermuda or other locales….[But] the problem is not just the company, it’s the person buying the insurance. You could have an insurance company. They advertise, we hold only T-bills and you know they’re safe. People don’t want that. It’s not what I would want. I want the riskier life insurance to get a higher return on the package.
The fact that it’s not for me, makes it really easy to spend for something that promises higher return. They don’t pay it all off, or oh, whatever, but I’ll be dead then, and you don’t think that explicitly. But your ability to monitor the true safety is maybe fairly weak. Maybe it’s efficient to have a bunch of these not pay off, and you get the higher yields on average. You don’t want full safety in most spheres of human existence. The real risk is that you die, right?
TABARROK: If anything, the insurance markets have become safer over time because as they get larger, the law of large numbers does mean that the risk falls.
COWEN: Assets are more and more correlated over time, I would say.
TABARROK: Well, so we have reinsurance…
COWEN: It’s not that everyone’s going to die at once. The problem is the assets all go crazy at the same time. The world’s more globalized, the gains in the S&P 500 have been concentrated in seven or eight stocks lately. There’s a lot of worrying signs on the asset side, this higher correlation and the law of large numbers is working against you. Fewer publicly traded companies. A place like China is not really somewhere you’re going to be investing in. Maybe you would have thought that 15 years ago. It seems to me going in the wrong direction.
Subscribe now to take a small step toward a much better world: Apple Podcasts | Spotify | YouTube.
Will Trump Appoint a Great FDA Commissioner?
A German newspaper asked for my take on the nomination of RFK Jr. to head HHS. Here’s what I said:
Operation Warp Speed stands as the crowning achievement of the first Trump administration, exemplifying the impact of a bold public-private partnership. OWS accelerated vaccine development, production, and distribution beyond what most experts thought possible, saving hundreds of thousands of American lives and demonstrating the power of American ingenuity in a time of crisis.
By nominating Robert F. Kennedy Jr., a prominent anti-vaccine activist, President Trump undermines his own legacy, and casts doubt on his administration’s commitment to protecting American lives through science-driven health policy.
Many better choices are available. Here is my 2017 post on potential people to head the FDA, many of which would also be great at HHS. No indent. Key points remain true.
As someone who has written about FDA reform for many years it’s gratifying that all of the people whose names have been floated for FDA Commissioner would be excellent, including Balaji Srinivasan, Jim O’Neill, Joseph Gulfo, and Scott Gottlieb. Each of these candidates understands two important facts about the FDA. First, that there is a fundamental tradeoff–longer and larger clinical trials mean that the drugs that are approved are safer but at the price of increased drug lag and drug loss. Unsafe drugs create concrete deaths and palpable fear but drug lag and drug loss fill invisible graveyards. We need an FDA commissioner who sees the invisible graveyard.
Each of the leading candidates also understands that we are entering a new world of personalized medicine that will require changes in how the FDA approves medical devices and drugs. Today almost everyone carries in their pocket the processing power of a 1990s supercomputer. Smartphones equipped with sensors can monitor blood pressure, perform ECGs and even analyze DNA. Other devices being developed or available include contact lenses that can track glucose levels and eye pressure, devices for monitoring and analyzing gait in real time, and headbands that monitor and even adjust your brain waves.
The FDA has an inconsistent, even schizophrenic, attitude towards these new devices—some have been approved, and yet at the same time the FDA has banned 23andMe and other direct-to-consumer genetic testing companies from offering some DNA tests because of “the risk that a test result may be used by a patient to self-manage”. To be sure, the FDA and other agencies have a role in ensuring that a device or test does what it says it does (the Theranos debacle shows the utility of that oversight). But the FDA should not be limiting the information that patients may discover about their own bodies or the advice that may be given based on that information. Interference of this kind violates the First Amendment and the long-standing doctrine that the FDA does not control the practice of medicine.
Srinivasan is a computer scientist and electrical engineer who has also published in the New England Journal of Medicine, Nature Biotechnology, and Nature Reviews Genetics. He’s a co-founder of Counsyl, a genetic testing firm that now tests ~4% of all US births, so he understands the importance of the new world of personalized medicine.
The world of personalized medicine also impacts how new drugs and devices should be evaluated. The more we look at people and diseases the more we learn that both are radically heterogeneous. In the past, patients have been classified and drugs prescribed according to a handful of phenomenological characteristics such as age and gender and occasionally race or ethnic background. Today, however, genetic testing and on-the-fly examination of RNA transcripts, proteins, antibodies and metabolites can provide a more precise guide to the effect of pharmaceuticals in a particular person at a particular time.
Greater targeting is beneficial, but as Peter Huber has emphasized, it means that drug development becomes much less a question of “does this drug work for the average patient?” and much more a question of “can we identify, within this large group of people, the subset who will benefit from the drug?” If we stick to standard methods, that means even larger and more expensive clinical trials and more drug lag and drug loss. Instead, personalized medicine suggests that we allow for more liberal approval decisions and improve our techniques for monitoring individual patients so that physicians can adjust prescribing in response to the body’s reaction. Give physicians a larger armory and let them decide which weapon is best for the task.
I also agree with Joseph Gulfo (writing with Briggeman and Roberts) that in an effort to be scientific the FDA has sometimes fallen victim to the fatal conceit. In particular, the ultimate goal of medical knowledge is increased life expectancy (and reduced morbidity), but that doesn’t mean that every drug should be evaluated on this basis. If a drug or device is safe and it shows activity against the disease as measured by symptoms, surrogate endpoints, biomarkers and so forth, then it ought to be approved. It often happens, for example, that no single drug is a silver bullet but that combination therapies work well. But you don’t really discover combination therapies in FDA-approved clinical trials–this requires the discovery process of medical practice. This is why Vincent DeVita, former director of the National Cancer Institute, writes in his excellent book, The Death of Cancer:
When you combine multidrug resistance and the Norton-Simon effect, the deck is stacked against any new drug. If the crude end point we look for is survival, it is not surprising that many new drugs seem ineffective. We need new ways to test new drugs in cancer patients, ways that allow testing at earlier stages of disease….
DeVita is correct. One of the reasons we see lots of trials for end-stage cancer, for example, is that you don’t have to wait long to count the dead. But no drug has ever been approved to prevent lung cancer (and only six have ever been approved to prevent any cancer) because the costs of running a clinical trial for long enough to count the dead are just too high to justify the expense. Preventing cancer would be better than trying to deal with it when it’s ravaging a body but we won’t get prevention trials without changing our standards of evaluation.
Jim O’Neill, managing director at Mithril Capital Management and a former HHS official, is an interesting candidate precisely because he also has an interest in regenerative medicine. With a greater understanding of how the body works we should be able to improve health and avoid disease rather than just treating disease but this will require new ways of thinking about drugs and evaluating them. A new and non-traditional head of the FDA could be just the thing to bring about the necessary change in mindset.
In addition to these big-ticket items, there are also a lot of simple changes that could be made at the FDA. Scott Alexander at Slate Star Codex has a superb post discussing reciprocity with Europe and Canada so we can get (at the very least) decent sunscreen and medicine for traveler’s diarrhea. Also, allowing any major pharmaceutical firm to produce any generic drug without going through an expensive approval process would be a relatively simple change that would shut down people like Martin Shkreli who exploit the regulatory morass for private gain.
The head of the FDA has tremendous power, literally the power of life and death. It’s exciting that we may get a new head of the FDA who understands both the peril and the promise of the position.
Signaling Quality in Crowdfunding Projects with Refund Bonuses
My latest paper, Signaling Quality: How Refund Bonuses Can Overcome Information Asymmetries in Crowdfunding (with the excellent Tim Cason and Robertas Zubrickas), has just been published in Management Science.
Many promising crowdfunding projects fail due to a fundamental issue: trust. Potential backers often hesitate because they lack confidence in the credibility or viability of the projects. This gap is natural, as traditional bank financing involves a bank acting as an intermediary, vetting the project, assessing its risk, and effectively endorsing it with their reputation. In contrast, crowdfunding operates without such intermediaries. Backers rely on limited, often one-sided information provided by project creators, making it challenging to assess risks or validate claims. Unlike banks, which can access financial records, credit histories, and industry expertise, individual backers typically lack the time, resources, or skills to conduct rigorous due diligence. Moreover, assessing risk is expensive. So how can we convey information about the true value of a crowdfunding project to investors?
Here my co-authors and I turn to refund bonuses. We have previously shown in lab experiments that refund bonuses can dramatically increase the rate of success of crowdfunding contracts and, more generally, make it possible to produce public goods privately. The idea of a refund bonus is simple. In an ordinary Kickstarter-like contract, if a project fails to raise enough funds to reach its threshold, the funds are returned to the investors. In a refund bonus contract, if a project fails to reach its threshold, the investors get their money back plus a refund bonus. The effect of the refund bonus is to make investing in socially valuable projects a no-lose proposition. Either the project succeeds, which is great because the project is worth more than its cost, or it fails and you get a refund bonus. The investor is better off either way.
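The no-lose payoff structure can be made concrete with a small sketch. All of the numbers below (pledge, project value, bonus size) are illustrative assumptions, not parameters from the paper:

```python
# Hypothetical backer payoffs under an ordinary Kickstarter-style contract
# vs. a refund-bonus contract. All numbers are illustrative assumptions.

def backer_payoff(pledge, project_value, succeeds, refund_bonus=0.0):
    """Net payoff to a single backer.

    pledge        : amount contributed
    project_value : backer's value if the project is funded
    succeeds      : whether total pledges reached the threshold
    refund_bonus  : extra payment returned on failure (0 for ordinary contracts)
    """
    if succeeds:
        return project_value - pledge   # project delivered: keep the surplus
    return refund_bonus                 # pledge refunded, plus any bonus

# A socially valuable project: value to the backer exceeds the pledge.
pledge, value = 100.0, 120.0

print(backer_payoff(pledge, value, succeeds=True))                      # 20.0
print(backer_payoff(pledge, value, succeeds=False))                     # 0.0
print(backer_payoff(pledge, value, succeeds=False, refund_bonus=5.0))   # 5.0
```

With the bonus, failure pays 5 rather than 0, so a backer who values the project above its cost gains in either outcome.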
Now consider the refund bonus from the point of view of the entrepreneurs. An entrepreneur who offers a refund bonus has a special reason to want their project to succeed, namely, if the project succeeds they don’t have to pay the refund bonus. Entrepreneurs know more about the quality of their project than investors. The entrepreneurs, for example, know the truth about their advertising campaign. Does the cool demo really work or was it puffery or worse? Entrepreneurs who offer refund bonuses are thus implicitly offering a kind of testament or bond–I am so confident that this project will succeed that I am willing to offer a refund bonus if it doesn’t succeed. As with a warranty, the point of the warranty is not that consumers will use it but that they won’t. The warranty is a signal of quality. Similarly, we show that offering refund bonuses can signal quality.
Working out the equilibrium requires some game theory, because if refund bonuses guaranteed high quality with certainty (i.e., if only entrepreneurs with high-quality projects offered refund bonuses), then every project that offered a refund bonus would succeed; but then entrepreneurs with lower-quality projects would have nothing to fear from offering refund bonuses either. Thus, the equilibrium is mixed: all entrepreneurs with high-quality projects offer refund bonuses, but some entrepreneurs with low-quality projects also offer them. Nevertheless, the equilibrium is such that on average refund bonuses signal quality. We test the theory in a lab experiment and it works. Investors were significantly more likely to put their money into projects where the entrepreneurs chose to offer refund bonuses (n.b. this is in comparison to experiments where refund bonuses were imposed, i.e. we specifically test the signaling role of refund bonuses).
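The “signals quality on average” claim is just Bayes’ rule applied to the mixed strategy. Here is a minimal sketch, assuming high-quality entrepreneurs always offer the bonus while low-quality ones mimic with some probability; the prior and mimic probability are illustrative assumptions, not values from the paper:

```python
# Why a refund bonus raises the posterior probability of high quality,
# given the mixed equilibrium described above. The prior (0.5) and the
# low-quality mimic probability (0.25) are illustrative assumptions.

def posterior_high_given_bonus(prior_high, p_bonus_low):
    """P(high quality | bonus offered) when high-quality entrepreneurs
    offer a bonus with probability 1 and low-quality entrepreneurs
    offer one with probability p_bonus_low (their mixed strategy)."""
    p_bonus = prior_high * 1.0 + (1 - prior_high) * p_bonus_low
    return prior_high / p_bonus

prior = 0.5    # assumed share of high-quality projects
mimic = 0.25   # assumed chance a low-quality project offers a bonus

print(posterior_high_given_bonus(prior, mimic))  # 0.8, up from the 0.5 prior
```

As long as low-quality entrepreneurs mimic with probability less than one, observing a refund bonus moves the posterior above the prior, which is exactly the sense in which the bonus signals quality on average.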
Thus, refund bonuses for crowdfunding provide a decentralized method of reducing asymmetric information. The refund bonus credibly allows information about quality to be transmitted from the entrepreneur to the investors. The bottom line is that refund bonuses increase the power of crowdfunding finance, making it more competitive with intermediated finance.
Addendum: Here is an excellent podcast on refund bonuses and crowdfunding. “Refund bonuses could revolutionize crowd funding!”