Italy’s Superbonus: The Dumbest Fiscal Policy in Recent Memory
Luis Garicano has an amazing post on “one of the dumbest fiscal policies in recent memory.” Launched in Italy during COVID by Prime Minister Conte, the “Superbonus” scheme subsidized 110% of housing renovation costs. Now, if one were to use outdated, simplistic, Econ 101-type reasoning, one would predict that such a scheme would be massively costly, not only because people would rush to renovate their homes for free but because the more expensive the renovation on paper, the bigger the bonus.
The proponents of the Superbonus, most notably Riccardo Fraccaro, were, however, advocates of Modern Monetary Theory, so deficits were considered only an illusory barrier to government spending and resource constraints were distant concerns. Italy still had to meet EU rules, so the deficit spending was concealed with creative accounting:
rather than direct cash grants, the government issued tax credits that could be transferred. A homeowner could claim these credits directly against their taxes, have contractors claim them against invoices, or sell them to banks. These credits became a kind of fiscal currency – a parallel financial instrument that functioned as off-the-books debt (Capone and Stagnaro, 2024). The setup purposefully created the illusion of a free lunch: it hid the cost to the government, as for European accounting purposes the credits would show up only as lost tax revenue rather than new spending.
In MMT terms, Fraccaro and his team effectively created money as a tax credit, putting into practice MMT’s notion that a sovereign issuer’s currency is ultimately a tax IOU.
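The incentive problem here is simple arithmetic. A minimal sketch, with made-up numbers (the function names and figures are illustrative, not drawn from the program’s actual rules), of why a transferable 110% credit rewards inflating the invoice:

```python
# Illustrative sketch (hypothetical numbers): why a 110% transferable tax
# credit invites invoice inflation. The credit is computed from the invoiced
# (paper) cost, so a bigger invoice means a bigger subsidy.

def superbonus_credit(invoiced_cost, rate=1.10):
    """Tax credit generated by an invoiced renovation cost at a 110% rate."""
    return invoiced_cost * rate

def net_gain(true_cost, invoiced_cost, credit_sale_discount=0.0):
    """Joint surplus to homeowner + contractor if the credit is sold
    (possibly at a discount) and the true cost is paid out of the proceeds."""
    credit = superbonus_credit(invoiced_cost)
    cash_from_credit = credit * (1 - credit_sale_discount)
    return cash_from_credit - true_cost

# Honest invoice: a 100k job yields a 110k credit, a 10k surplus even at face value.
honest = net_gain(true_cost=100_000, invoiced_cost=100_000)

# Inflated invoice: bill the same 100k job at 150k and the surplus jumps to 65k.
inflated = net_gain(true_cost=100_000, invoiced_cost=150_000)

print(round(honest), round(inflated))  # 10000 65000
```

Even the honest case is a better-than-free lunch, and every extra euro on paper is pure profit to split, which is exactly the fraud margin the scheme opened up.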
So what were the results? The “free renovation” scheme quickly spiraled out of control. Initially projected to cost €35 billion, the program ballooned to around €220 billion—about 12% of Italy’s GDP! Did it drive a surge in energy-efficient renovations? Hardly. Massive fraud ensued as builders and homeowners inflated renovation costs to siphon off government funds. Beyond that, surging demand ran headlong into resource constraints. Econ 101 again: in the short run, marginal cost curves slope upward.
Construction costs sharply increased – the Construction Cost Index grew by roughly 20% after the pandemic and surged another 13% after September 2021, with the Superbonus directly responsible for about 7 percentage points of that rise, according to Corsello and Ercolani (2024). The price of setting up scaffolding, an essential first step for renovation, increased by 400% by the end of 2021.
…Even the program’s environmental benefits came at an astronomical cost – any calculation will yield far north of €1,000 per ton of carbon saved (versus an ETS Carbon price of around €80 per ton).
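The cost-per-ton comparison is back-of-envelope arithmetic. In this sketch the €220 billion program cost and €80 ETS price come from the text above, but the tons of CO2 saved is an invented, deliberately generous figure:

```python
# Back-of-envelope sketch: implied carbon abatement cost of the program.
# Program cost and ETS price are from the post; the tons-saved figure is
# a hypothetical (generous) assumption for illustration.

def cost_per_ton(program_cost_eur, tons_co2_saved):
    return program_cost_eur / tons_co2_saved

implied = cost_per_ton(220e9, 100e6)   # assume 100 million tons avoided
ets_price = 80.0

print(implied, implied / ets_price)    # 2200.0 27.5
```

Even crediting the renovations with an implausibly large 100 million tons of avoided emissions, the implied cost is €2,200 per ton, more than 27 times the price of abatement on the ETS market.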
Moreover, as Garicano trenchantly notes, once started, the program’s structure made it fiendishly difficult to stop:
The benefits were concentrated among vocal constituencies: homeowners getting renovations, the environmental movement, and contractors seeing booming business. The costs, while enormous, were spread across all taxpayers and pushed into the future through the tax credit mechanism. No government—leftist, technocratic, or right-wing—was able to resist its logic. Parliament consistently pushed back against efforts to limit its scope, even after fraud estimates hit €16 billion. As prime minister, Mario Draghi, despite publicly criticizing the program for tripling construction costs, could not halt it — in fact, his initial action was to simplify access to it. When his government attempted to curb abuse, the Five Star Movement reacted with anger, and even modest controls on credit transfers were fought. By 2023, Giorgia Meloni’s right-wing government faced the same constraints—industry groups protested, coalition partners balked.
In normal times, the EU might have intervened to curb the reckless deficit spending—everyone knew what was going on, even if the numbers were temporarily kept off the books. But during COVID, the EU turned a blind eye, and the ECB kept interest rates low.
In fact, Garicano argues that the Superbonus story is merely the most blatant example of deeper systemic issues which now trouble the entire EU:
This erosion of discipline isn’t limited to Italy. France’s deficit has drifted to 6.1% of GDP. Spain reversed its post-crisis pension reform right around the time Italy was passing the Superbonus, with much larger negative consequences for fiscal sustainability. In a world where the ECB will always intervene to prevent bond market pressure and Brussels cannot credibly enforce fiscal rules on large states, sustainable fiscal policy becomes politically almost impossible.
The very mechanisms designed to protect the euro may now be undermining it.
How the System Works
Charles Mann is worried that so few of us have any notion of the giant, interconnected systems that keep us alive and thriving. His new series at The New Atlantis, How the System Works, is a primer on civilization. As you might expect from Mann, it’s beautifully written with arresting facts and images:
The great European cathedrals were built over generations by thousands of people and sustained entire communities. Similarly, the electric grid, the public-water supply, the food-distribution network, and the public-health system took the collective labor of thousands of people over many decades. They are the cathedrals of our secular era. They are high among the great accomplishments of our civilization. But they don’t inspire bestselling novels or blockbuster films. No poets celebrate the sewage treatment plants that prevent us from dying of dysentery. Like almost everyone else, we rarely note the existence of the systems around us, let alone understand how they work.
…Water, food, energy, public health — these embody a gloriously egalitarian and democratic vision of our society. Americans may fight over red and blue, but everyone benefits in the same way from the electric grid. Water troubles and food contamination are afflictions for rich and poor alike. These systems are powerful reminders of our common purpose as a society — a source of inspiration when one seems badly needed.
Every American stands at the end of a continuing, decades-long effort to build and maintain the systems that support our lives. Schools should be, but are not, teaching students why it is imperative to join this effort. Imagine a course devoted to how our country functions at its most basic level. I am a journalist who has been lucky enough to have learned something about the extraordinary mechanisms we have built since Jefferson’s day. In this series of four articles, I want to share some of the highlights of that imaginary course, which I have taken to calling “How the System Works.”
We begin with our species’ greatest need and biggest system — food.
and here’s one telling fact from the first essay:
Today more than 1 percent of the world’s industrial energy is devoted to making ammonia fertilizer. “That 1 percent,” the futurist Ramez Naam says, “roughly doubles the amount of food the world can grow.”
Addendum: Tom Meadowcroft from the comments: I teach chemical engineers, who are expert at understanding, designing and managing processes, and will be running many of these civilizational processes after they graduate. Even amongst that group of very bright thinkers, there is remarkably little knowledge as to how we achieve clean water, reliable electricity, fuel for transport and industry, dispose of sewage, and grow and distribute food. These same young adults can all tell you about colonial mindsets, how the world is going to burn, and how various groups are victimized. Our K-12 education system has very warped priorities and remarkably ignorant people at the front of the classroom.
Steel Tariffs in Two Pictures
Recall Principle 2 of Three Simple Principles of Trade Policy: Businesses are Consumers Too. Case in point: steel. Justin Wolfers summarizes an analysis of Trump’s 2018 steel tariffs:
Going back further, we have a good analysis from Lydia Cox of the Bush steel tariffs. Even though the tariffs were temporary, they led to a rearrangement of supply chains that caused long-lasting declines in exports and employment in steel-using industries.

Lift the Ban on Supersonics: No Boom
Boom, the supersonic startup, has announced that their new jet reaches supersonic speeds but without creating much of an audible boom. How so? According to CEO Blake Scholl:
It’s actually well-known physics called Mach cutoff. When an aircraft breaks the sound barrier at a sufficiently high altitude, the boom refracts in the atmosphere and curls upward without reaching the ground. It makes a U-turn before anyone can hear it. Mach cutoff physics is a theoretical capability on some military supersonic aircraft; now XB-1 has proven it with airliner-ready technology. Just as a light ray bends as it goes through a glass of water, sound rays bend as they go through media with varying speeds of sound. Speed of sound varies with temperature… and temperature varies with altitude. With colder temperatures aloft, sonic booms bend upward. This means that sonic booms can make a U-turn in the atmosphere without ever touching the ground. The height of the U varies—with the aircraft speed, with atmospheric temperature gradient, and with winds.
…Boomless Cruise requires engines powerful enough to break the sound barrier at an altitude high enough that the boom has enough altitude to U-turn. And real-time weather and powerful algorithms to predict the boom propagation precisely.
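The Mach-cutoff geometry Scholl describes can be sketched in a few lines. This is a simplified still-air model using the International Standard Atmosphere temperature profile; as Boom notes, real predictions also need winds and measured temperature gradients:

```python
import math

# Sketch of the Mach-cutoff condition in a standard atmosphere (no wind).
# Sound speed rises with temperature. With warm air below and cold air
# aloft, a downward boom ray refracts (Snell's law) and turns back upward
# provided the aircraft's speed is below the sound speed at the ground.
# Cutoff Mach number ~ c(ground) / c(flight altitude).

def sound_speed(temp_kelvin):
    """Speed of sound in dry air (m/s): c = sqrt(gamma * R * T)."""
    gamma, R = 1.4, 287.05
    return math.sqrt(gamma * R * temp_kelvin)

def isa_temperature(alt_m):
    """ISA troposphere: 288.15 K at sea level, falling 6.5 K per km up to 11 km."""
    return 288.15 - 0.0065 * min(alt_m, 11_000)

def cutoff_mach(alt_m):
    """Flight Mach number below which (in still air) a downward boom ray
    turns horizontal, and then upward, before reaching the ground."""
    return sound_speed(isa_temperature(0)) / sound_speed(isa_temperature(alt_m))

# From 15 km, cruise up to roughly Mach 1.15 without the boom touching down.
print(round(cutoff_mach(15_000), 3))  # 1.153
```

The U-shape of the ray falls out of the same physics: the ray bends toward the slower medium, so as it descends into warmer, faster air it flattens out and curls back up, just like light refracting through water.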
Here is the crazy part. Civilian supersonic aircraft have been banned in the United States for over 50 years! In case that wasn’t clear, we didn’t ban noisy aircraft, we banned supersonic aircraft. Thus, even quiet supersonic aircraft are banned today. This was a serious mistake. Aside from the fact that the noise was exaggerated, technological development is endogenous.
If you ban supersonic aircraft, the money, experience and learning by doing needed to develop quieter supersonic aircraft won’t exist. A ban will make technological developments in the industry much slower and dependent upon exogenous progress in other industries.
When we ban a new technology we have to think not just about the costs and benefits of a ban today but about the costs and benefits on the entire glide path of the technology.
In short, we must build to build better. We stopped building and so it has taken more than 50 years to get better. Not learning, by not doing.
In 2018 Congress directed the FAA:
..to exercise leadership in the creation of Federal and international policies, regulations, and standards relating to the certification and safe and efficient operation of civil supersonic aircraft.
But, aside from tidying up some regulations related to testing, the FAA hasn’t done much to speed up progress. I’d like to see the new administration move forthwith to lift the ban on supersonic aircraft. We have been moving too slowly.
Addendum: Elon says it will happen.
Greenland Next

Hat tip: Max
Dwarkesh’s Question
One question I had for you while we were talking about the intelligence stuff was, as a scientist yourself, what do you make of the fact that these things have basically the entire corpus of human knowledge memorized and they haven’t been able to make a single new connection that has led to a discovery? Whereas if even a moderately intelligent person had this much stuff memorized, they would notice — Oh, this thing causes this symptom. This other thing also causes this symptom. There’s a medical cure right here.
Shouldn’t we be expecting that kind of stuff?
It’s a very good question. In 2023, I quipped, “I think they have, we just haven’t asked them.” Maybe, but that is less clear today. Dwarkesh reports that there have been no good answers.
The Licensing Racket
I review a very good new book on occupational licensing, The Licensing Racket by Rebecca Haw Allensworth in the WSJ.
Most people will concede that licensing for hair braiders and interior decorators is excessive while licensing for doctors, nurses and lawyers is essential. Hair braiders pose little to no threat to public safety, but subpar doctors, nurses and lawyers can ruin lives. To Ms. Allensworth’s credit, she asks for evidence. Does occupational licensing protect consumers? The author focuses on the professional board, the forgotten institution of occupational licensing.
Governments enact occupational-licensing laws but rarely handle regulation directly—there’s no Bureau of Hair Braiding. Instead, interpretation and enforcement are delegated to licensing boards, typically dominated by members of the profession. Occupational licensing is self-regulation. The outcome is predictable: Driven by self-interest, professional identity and culture, these boards consistently favor their own members over consumers.
Ms. Allensworth conducted exhaustive research for “The Licensing Racket,” spending hundreds of hours attending board meetings—often as the only nonboard member present. At the Tennessee board of alarm-system contractors, most of the complaints come from consumers who report the sort of issues that licensing is meant to prevent: poor installation, code violations, high-pressure sales tactics and exploitation of the elderly. But the board dismisses most of these complaints against its own members, and is far more aggressive in disciplining unlicensed handymen who occasionally install alarm systems. As Ms. Allensworth notes, “the board was ten times more likely to take action in a case alleging unlicensed practice than one complaining about service quality or safety.”
She finds similar patterns among boards that regulate auctioneers, cosmetologists and barbers. Enforcement efforts tend to protect turf more than consumers. Consumers care about bad service, not about who is licensed, so take a guess who complains about unlicensed practitioners? Licensed practitioners. According to Ms. Allensworth, it was these competitor-initiated cases, “not consumer complaints alleging fraud, predatory sales tactics, and graft,” where boards gave the stiffest penalties.
You might hope that boards that oversee nurses and doctors would prioritize patient safety, but Ms. Allensworth’s findings show otherwise. She documents a disturbing pattern of boards that have ignored or forgiven egregious misconduct, including nurses and physicians extorting sex for prescriptions, running pill mills, assaulting patients under anesthesia and operating while intoxicated.
Read the whole thing.
Three Simple Principles of Trade Policy
Are we in a trade war today? Who knows? Doesn’t really matter. It’s always a good time to review important principles. A good source is Doug Irwin’s Three Simple Principles of Trade Policy published in 1996. Below I have updated occasionally with more recent data.
Principle 1: A Tax on Imports is a Tax on Exports
Exports are necessary to generate the earnings to pay for imports, or exports are the goods a country must give up in order to acquire imports….if foreign countries are blocked in their ability to sell their goods in the United States, for example, they will be unable to earn the dollars they need to purchase U.S. goods.
…The equivalence of export and import taxes is not an obvious proposition, and it is often counterintuitive to most people. Imagine taking a poll of average Americans and asking the following question: “Should the United States impose import tariffs on foreign textiles to prevent low-wage countries from harming thousands of American textile workers?” Some fraction, perhaps even a sizeable one, of the respondents would surely answer affirmatively. If asked to explain their position, they would probably reply that import tariffs would create jobs for Americans at the expense of foreign workers and thereby reduce domestic unemployment. Suppose you then asked those same people the following question: “Should the United States tax the exportation of Boeing aircraft, wheat and corn, computers and computer software, and other domestically produced goods?” I suspect the answer would be a resounding and unanimous “No!” After all, it would be explained, export taxes would destroy jobs and harm important industries. And yet the Lerner symmetry theorem says that the two policies are equivalent in their economic effects.
Exports and imports rise and fall together. It is surely obvious that if you want more imports you must export more (barring a bit of borrowing; see below). The same thing is true in other countries. As a result, it is also true that when you import more you export more.
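A toy calculation makes the Lerner symmetry point concrete. The prices and tax rates below are illustrative; the point is that what production and consumption decisions respond to is the domestic relative price of imports in terms of exports, and the two taxes distort it identically:

```python
# Toy check of Lerner symmetry: an import tariff t and an export tax s
# with (1 + t) = 1 / (1 - s) produce the same domestic relative price.

def relative_price_import_tariff(p_import, p_export, t):
    """Domestic relative price of imports with an ad valorem import tariff t."""
    return (p_import * (1 + t)) / p_export

def relative_price_export_tax(p_import, p_export, s):
    """Domestic relative price of imports with an ad valorem export tax s:
    exporters keep only (1 - s) of the world price."""
    return p_import / (p_export * (1 - s))

# World prices of 1 for both goods. A 25% import tariff is equivalent to
# a 20% export tax, since 1.25 = 1 / (1 - 0.20).
a = relative_price_import_tariff(1.0, 1.0, t=0.25)
b = relative_price_export_tax(1.0, 1.0, s=0.20)
print(a, b)  # 1.25 1.25
```

Either policy makes imports 25% dearer relative to exports, so either one shrinks both flows by the same amount.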

Principle 2: Businesses are Consumers Too
Business firms are, in fact, bigger consumers of imported products than are U.S. households.
As of 2024, more than 64% of imports are intermediate products. See here for the data.
By viewing imports not as final consumer goods but as inputs to U.S. production, policy makers can more clearly recognize that the issue is not so much one of “saving” jobs but of “trading off” jobs between sectors. This brings home forcefully the most important lesson in all of economics: there is no such thing as a free lunch. Every action involves a trade-off of some sort. Higher domestic steel prices help employment in the steel industry but harm employment in steel-using industries. Higher domestic semiconductor prices help employment in the semiconductor industry but harm employment in semiconductor-using industries. As John Stuart Mill wrote in 1848 in the context of import protection, “The alternative is not between employing our own country-people and foreigners, but between employing one class or another of our own country-people.”
Principle 3: Trade Imbalances Reflect Capital Flows
There is a fundamental equation of international finance that relates this net borrowing and lending activity to the current account. The equation is:
Exports – Imports = Savings – Investment
The powerful implication of this equation is that if a country wishes to reduce its trade deficit, the gap between its domestic investment and its domestic savings must be reduced.
…A country’s trade balance is related to international capital flows–not with open or closed markets, unfair trade practices, or national competitiveness. If a country wants to solve the “problem” of its trade deficit, it must reverse the international flow of capital into its country. In many cases net foreign borrowing can be reversed by reducing the government fiscal deficit. [emphasis added, AT]
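The identity can be checked with made-up national accounts numbers. It follows from the expenditure identity Y = C + I + G + (X − M) together with the definition of national saving, S = Y − C − G (all figures below are invented, in billions):

```python
# National-accounts sketch of: Exports - Imports = Savings - Investment.
# From Y = C + I + G + (X - M) and S = Y - C - G, it follows that
# X - M = S - I. Illustrative, made-up numbers (billions):

Y = 25_000   # GDP
C = 17_000   # consumption
I = 4_500    # investment
G = 4_400    # government purchases

X_minus_M = Y - C - I - G   # net exports, from the expenditure identity
S = Y - C - G               # national saving

print(X_minus_M, S - I)     # -900 -900
```

This economy runs a 900 billion trade deficit for exactly one reason: it invests 900 billion more than it saves, with the gap financed by capital flowing in from abroad. Nothing about tariffs, competitiveness, or unfair practices appears anywhere in the identity.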
Doug concludes:
These three simple principles of trade policy…[have] stood the test of time, they come as close to truths as anything economists have to offer in any area of policy controversy. Yet they are routinely denied, explicitly or implicitly, in trade policy debates in the United States and elsewhere. I do not imagine that a greater appreciation of these principles would invariably bring about more liberal trade policies; I offer them, rather, in the more modest hope that they might lead to sounder debates in which the real consequences of government policies are confronted more seriously than at present.
Hat tip: Erica York.
The New Consensus on the Minimum Wage
My take is that there is an evolving new consensus on the minimum wage. Namely, the effects of the minimum wage are heterogeneous and take place on more margins than employment. Read Jeffrey Clemens’s brilliant and accessible paper in the JEP for the theory. A good example of the heterogeneous impact is this new paper by Clemens, Gentry and Meer on how the minimum wage makes it more difficult for the disabled to get jobs:
…We find that large minimum wage increases significantly reduce employment and labor force participation for individuals of all working ages with severe disabilities. These declines are accompanied by a downward shift in the wage distribution and an increase in public assistance receipt. By contrast, we find no employment effects for all but young individuals with either non-severe disabilities or no disabilities. Our findings highlight important heterogeneities in minimum wage impacts, raising concerns about labor market policies’ unintended consequences for populations on the margins of the labor force.
Or Neumark and Kayla on the minimum wage and blacks:
We provide a comprehensive analysis of the effects of minimum wages on blacks, and on the relative impacts on blacks vs. whites. We study not only teenagers – the focus of much of the minimum wage-employment literature – but also other low-skill groups. We focus primarily on employment, which has been the prime concern with the minimum wage research literature. We find evidence that job loss effects from higher minimum wages are much more evident for blacks, and in contrast not very detectable for whites, and are often large enough to generate adverse effects on earnings.
Remember also that a “job” is not a simple contract of hours of work for dollars but contains many explicit and implicit margins: work conditions, fringe benefits, possibilities for promotion, training and so forth. For example, in Unintended workplace safety consequences of minimum wages, Liu, Lu, Sun and Zhang find that the minimum wage increases accidents, probably because at a higher minimum wage the pace of work increases:
we find that large increases in minimum wages have significant adverse effects on workplace safety. Our findings indicate that, on average, a large minimum wage increase results in a 4.6 percent increase in the total case rate.
Note that these effects don’t always appear, in large part because, depending on the scope of the minimum wage increase and the industry, much of the cost of the minimum wage may be passed on to prices. For example, here is Renkin and Siegenthaler finding that higher minimum wages increase grocery prices:
We use high-frequency scanner data and leverage a large number of state-level increases in minimum wages between 2001 and 2012. We find that a 10% minimum wage hike translates into a 0.36% increase in the prices of grocery products. This magnitude is consistent with a full pass-through of cost increases into consumer prices.
Similarly, Ashenfelter and Jurajda find there is no free lunch from minimum wage increases, indeed there is approximately full pass through at McDonalds:
Higher labor costs induced by minimum wage hikes are likely to increase product prices. If both labor and product markets are competitive, firms can pass through up to the full increase in costs (Fullerton and Metcalf 2002). With constant returns to scale, firms adjust prices in response to minimum wage hikes in proportion to the cost share of minimum wage labor. Under full price pass-through, the real income increases of low-wage workers brought about by minimum wage hikes may be lower than expected (MaCurdy 2015). There is growing evidence of near full price pass-through of minimum wages in the United States…. Based on data spanning 2016–20, we find a 0.2 price elasticity with respect to wage increases driven (instrumented) by minimum wage hikes. Together with the 0.7 (first-stage) elasticity of wage rates with respect to minimum wages, this implies a (reduced-form) price elasticity with respect to minimum wages of about 0.14. This corresponds to near-full price pass-through of minimum-wage-induced higher costs of labor.
You can draw your own conclusions about the desirability of the minimum wage, but the fleeting hope that it raises wages without trade-offs is gone. The effects of the minimum wage are nuanced, heterogeneous, and by no means entirely positive.
Genetic Prediction and Adverse Selection
In 1994 I published Genetic Testing: An Economic and Contractarian Analysis which discussed how genetic testing could undermine insurance markets. I also proposed a solution, genetic insurance, which would in essence insure people for changes in their health and life insurance premiums due to the revelation of genetic data. Later John Cochrane would independently create Time Consistent Health Insurance a generalized form of the same idea that would allow people to have long term health insurance without being tied to a single firm.
The Human Genome Project was completed in 2003 but, somewhat surprisingly, insurance markets didn’t break down, even as genetic information became more common. We know from twin studies that genetic heritability is very large, but it turned out that the effect of each gene variant is very small. Thus, only a few diseases can be predicted well using single-gene mutations. Since each SNP has only a small effect on disease, predicting how genes influence disease requires data on hundreds of thousands, even millions, of people: millions of SNPs across the genome matched to disease outcomes. Until recently that was cost-prohibitive, and as a result the available genetic information lacked much predictive power.
In an impressive new paper, however, Azevedo, Beauchamp and Linnér (ABL) show that data from Genome-Wide Association Studies can be used to create polygenic risk indexes (PGIs) which can predict individual disease risk from the aggregate effects of many genetic variants. The data is prodigious:
We analyze data from the UK Biobank (UKB) (Bycroft et al., 2018; Sudlow et al., 2015). The UKB contains genotypic and rich health-related data for over 500,000 individuals from across the United Kingdom who were between 40 and 69 years old at recruitment (between 2006 and 2010). UKB data is linked to the UK’s National Health Service (NHS), which maintains detailed records of health events across the lifespan and with which 98% of the UK population is registered (Sudlow et al., 2015). In addition, all UKB participants took part in a baseline assessment, in which they provided rich environmental, family history, health, lifestyle, physical, and sociodemographic data, as well as blood, saliva, and urine samples.
The UKB contains genome-wide array data for ∼800,000 genetic variants for ∼488,000 participants.
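For readers unfamiliar with polygenic indexes, here is a minimal sketch of the construction. The effect sizes and genotypes are invented; real PGIs aggregate up to millions of SNPs with GWAS-estimated weights and more careful standardization:

```python
import statistics

# Minimal PGI sketch (made-up betas and genotypes): each SNP genotype is
# coded 0/1/2 (count of risk alleles); the PGI is the genotype vector
# weighted by GWAS effect sizes, then standardized across the sample.

def polygenic_index(genotypes, betas):
    """Raw PGI for one person: sum_i beta_i * g_i."""
    return sum(b * g for b, g in zip(betas, genotypes))

betas = [0.12, -0.05, 0.08, 0.02]   # hypothetical GWAS effect sizes
sample = [
    [0, 1, 2, 0],
    [2, 0, 1, 1],
    [1, 2, 0, 2],
]

raw = [polygenic_index(g, betas) for g in sample]

# Standardize so the PGI has mean 0 and SD 1 in the sample.
mu, sd = statistics.mean(raw), statistics.pstdev(raw)
pgis = [(x - mu) / sd for x in raw]
print([round(x, 2) for x in pgis])
```

Each individual weight is tiny, which is why single-gene tests predicted so little; it is the sum over hundreds of thousands of SNPs, estimated on Biobank-scale samples, that makes the index predictive.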
So for each of these individuals ABL construct risk indexes and they ask how significant is this new information for buying insurance in the Critical Illness Insurance market:
Critical illness insurance (CII) pays out a lump sum in the event that the insured person gets diagnosed with any of the medical conditions listed on the policy (Brackenridge et al., 2006). The lump sum can be used as the policyholder wishes. The policy pays out once and is thereafter terminated.
… Major CII markets include Canada, the United Kingdom, Japan, Australia, India, China, and Germany. It is estimated that 20% of British workers were covered by a CII policy in 2009 (Gatzert and Maegebier, 2015). The global CII market has been valued at over $100 billion in 2021 and was projected to grow to over $350 billion by 2031 (Allied Market Research, 2022).
The answer, as you might have guessed by now, is very significant. Even though current PGIs explain only a fraction of total genetic risk, they are already predictive enough that it would make sense for individuals with high measured risk to purchase insurance while those with low measured risk opt out—leading to adverse selection that threatens the financial sustainability of the insurance market.
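The unraveling logic can be sketched with a toy model (all parameters invented): everyone knows their own risk, say from a PGI, while the insurer can only price at the average cost of whoever remains in the pool. Low-risk buyers drop out, the premium rises, and the spiral continues:

```python
# Toy adverse-selection spiral (stylized, made-up numbers). Each person
# buys only if the premium is no more than their own expected cost times
# a willingness-to-pay markup; the insurer reprices at the average
# expected cost of the remaining pool each round.

def unravel(risks, payout=100.0, wtp_markup=1.2, rounds=10):
    """Return the risks still insured once the pool stops shrinking."""
    pool = list(risks)
    for _ in range(rounds):
        premium = payout * sum(pool) / len(pool)
        stay = [r for r in pool if premium <= r * payout * wtp_markup]
        if stay == pool:
            break                     # pool is stable
        pool = stay
        if not pool:
            break                     # market has fully unraveled
    return pool

risks = [0.01, 0.02, 0.05, 0.10, 0.20]   # hypothetical illness probabilities
print(unravel(risks))                    # [0.2]
```

Starting from five risk types, only the riskiest buyer survives repricing: the pool unravels from the bottom, which is exactly the threat private genetic information poses to critical illness insurance.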
Today, the 500,000 people in the UK Biobank don’t know their PGIs, but in principle they could and in the future they will. Indeed, as GWAS sample sizes increase, PGI betas will become more accurate and will be applied to a greater fraction of an individual’s genome, so individual PGIs will become increasingly predictive, exacerbating selection problems in insurance markets.
If my paper was a distant early warning, Azevedo, Beauchamp, and Linnér provide an early—and urgent—warning. Without reform, insurance markets risk unraveling. The authors explore potential solutions, including genetic insurance, community rating, subsidies, and risk adjustment. However, the effectiveness of these measures remains uncertain, and knee-jerk policies, such as banning insurers from using genetic information, could lead to the collapse of insurance altogether.
Peace and Free Trade
In The Spirit of the Laws, Montesquieu famously argued that:
…Peace is the natural effect of trade. Two nations who traffic with each other become reciprocally dependent; for if one has an interest in buying, the other has an interest in selling; and thus their union is founded on their mutual necessities.
Similar arguments were made by Kant, Cobden, Angell, and others. The effect of free trade on war was perhaps most pithily summarized by the aphorism “when goods don’t cross borders, soldiers will.” In Territory flows and trade flows between 1870 and 2008 Hu, Li and Zhang offer supporting evidence:
Countries gain and lose territories over time, generating territory flows that represent the transfer of territorial sovereignty. Countries also export and import goods, creating trade flows that represent the transfer of merchandise ownership. We find a substitution between these two international flows during the years 1870 and 2008; that is, country pairs with greater trade flows have smaller territory flows. This indicates how international trade enhances international security: reciprocal goods transactions discourage irreciprocal territorial exchanges.
Not all territorial exchanges involve war, but most do.
See also Polachek and Seiglie in the Handbook of Defense Economics who find that “A doubling of trade leads to a 20% diminution of belligerence.”
FDA Deregulation of E-Cigarettes Saved Lives and Spurred Innovation
What would happen to drug development if the FDA lost its authority to prohibit new drugs? Would research and development boom and lives be saved? Or would R&D decline and lives be lost to a flood of unsafe and ineffective drugs? Or perhaps R&D would decline as demand for new drugs faltered due to public hesitation in the absence of FDA approval? In an excellent new paper Pesko and Saenz examine one natural experiment: e-cigarettes.
The FDA banned e-cigarettes as unapproved drugs soon after their introduction in the United States. The FDA had previously banned other nicotine-infused products. Thus, it was surprising when, in 2010, the FDA was prohibited from regulating e-cigarettes as a drug/device: a court ruled that Congress had intended for e-cigarettes to be regulated as a tobacco product, not as a drug.
As of 2010, therefore, e-cigarettes were not FDA regulated:
…e–cigarette companies were able to bypass the lengthy and costly drug approval process entirely. Additionally, without FDA drug regulation, e–cigarette companies could also freely enter the market, modify products without approval, and bypass extensive post–market reporting requirements and quality control standards.
Indeed, it wasn’t until 2016 that the FDA formally “deemed” e-cigarettes to be tobacco products (“deemed” since they don’t actually contain tobacco), and approvals under the less stringent tobacco regulations were not required until 2020. For nearly a decade, therefore, e-cigarettes were almost entirely unregulated and then lightly regulated under the tobacco framework. So, what happened during this period?
Pesko and Saenz show that FDA deregulation led to a boom in e-cigarette research and development which improved e-cigarettes and led to many lives saved as people switched from smoking to vaping.
The boom in research and development is evidenced by a very large increase in US e-cigarette patents. We do not see a similar increase in Australia (where e-cigarettes were not deregulated), nor do we see an increase for non-e-cigarette smoking cessation products (figure 1a of their paper, not shown here).

Estimating the decline in smoking and smoking-attributable mortality (SAM) is more difficult, but the authors assemble a large collection of data broken down by demographics and estimate that prohibiting the FDA from regulating e-cigarettes reduced smoking-attributable mortality by nearly 10% on average each year from 2011 to 2019, for a total savings of some 677,000 life-years.
The authors pointedly compare what happened under deregulation of e-cigarettes (innovation and lives saved) with what happened to similar smoking cessation products that remained under FDA regulation (stagnation and no reduction in smoking-attributable mortality).
A key takeaway on the slowness of FDA drug regulation is that it took 9 years before nicotine gum could be sold at a higher nicotine strength, 12 years before it could be sold OTC, and 15 years before it could be sold with a flavor. Further, a recent editorial laments that there has been largely non-existent innovation in FDA-approved smoking cessation drugs since 2006 (Benowitz et al., 2023). In particular, the “world’s oldest smoking cessation aid,” cytisine, first brought to market in 1964 in Bulgaria (Prochaska et al., 2013), and with quit success rates exceeding single forms of nicotine replacement therapy (NRT) (Lindson et al., 2023), is not approved as a drug in the United States.
The authors conclude, “this situation raises concern that drugs may be over-regulated in the United States…”. Quite so.
Addendum: A quick review of the FDA literature. In addition to classic works by Peltzman on the 1962 Amendments and by myself on what we can learn about the FDA from off-label pricing, we have a spate of recent papers, including Parker Rogers, which I covered earlier:
In an important and impressive new paper, Parker Rogers looks at what happens when the FDA deregulates or “down-classifies” a medical device type from a more stringent to a less stringent category. He finds that deregulated device types show increases in entry and innovation, as measured by patents and patent quality, and decreases in prices. Safety is either negligibly affected or, in the case of products subject to potential litigation, improved.
and Isakov, Lo and Montazerhodjat, which finds that FDA statistical standards tend to be too conservative, especially for drugs meant to treat deadly diseases (see my comments on their paper and more links in Is the FDA Too Conservative or Too Aggressive?).
See also FDA commentary for much more, from sunscreens to lab-developed tests.
It’s Time to Build the Peptidome!
Antimicrobial resistance is a growing problem. Peptides, short sequences of amino acids, are nature’s first defense against bacteria. Research on antimicrobial peptides is promising, but such research could be much more productive if combined with machine learning on big data. Collecting, collating, and organizing big data, however, is a public good and thus underprovided. Current peptide databases are small, inconsistent, incompatible with one another, and biased against negative controls. Thus, there is scope for a million-peptide database modelled on something like the Human Genome Project or ProteinDB:
ML needs data. Google’s AlphaGo trained on 30 million moves from human games and orders of magnitude more from games it played against itself. The largest language models are trained on at least 60 terabytes of text. AlphaFold was trained on just over 100,000 3D protein structures from the Protein Data Bank.
The data available for antimicrobial peptides is nowhere near these benchmarks. Some databases contain a few thousand peptides each, but they are scattered, unstandardized, incomplete, and often duplicative. Data on a few thousand peptide sequences and a scattershot view of their biological properties are simply not sufficient to get accurate ML predictions for a system as complex as protein-chemical reactions. For example, the APD3 database is small, with just under 4,000 sequences, but it is among the most tightly curated and detailed. However, most of the sequences available are from frogs or amphibians due to path-dependent discovery of peptides in that taxon. Another database, CAMPR4, has on the order of 20,000 sequences, but around half are “predicted” or synthetic peptides that may not have experimental validation, and contain less info about source and activity. The formatting of each of these sources is different, so it’s not easy to put all the sequences into one model. More inconsistencies and idiosyncrasies stack up for the dozens of other datasets available.
There is even less negative training data; that is, data on all the amino-acid sequences without interesting publishable properties. In current ML research, labs will test dozens or even hundreds of peptide sequences for activity against certain pathogens, but they usually only publish and upload the sequences that worked.
…The data problem facing peptide research is solvable with targeted investments in data infrastructure. We can make a million-peptide database.
There are no significant scientific barriers to generating a 1,000x or 10,000x larger peptide dataset. Several high-throughput testing methods have been successfully demonstrated, with some screening as many as 800,000 peptide sequences and nearly doubling the number of unique antimicrobial peptides reported in publicly available databases. These methods will need to be scaled up, not only by testing more peptides, but also by testing them against different bacteria, checking for human toxicity, and testing other chemical properties, but scaling is an infrastructure problem, not a scientific one.
This strategy of targeted data infrastructure investments has three successful precedents: PubChem, the Human Genome Project, and ProteinDB.
Much more in this excellent piece of science and economics from IFP and Max Tabarrok.
The Interface as Infernal Contract
A brilliant critique of AI, and a great read:
In 1582, the Holy Roman Emperor Rudolf II commissioned a clockwork automaton of St. George. The saint could raise his sword, nod gravely, and even bleed—a trick involving ox bladder and red wine—before collapsing in pious ecstasy. The machine was a marvel, but Rudolf’s courtiers recoiled. The automaton’s eyes, they whispered, followed you across the room. Its gears creaked like a death rattle. The emperor had it melted down, but the lesson remains: Humans will always mistake the clatter of machinery for the stirrings of a soul.
Fast forward to 2023. OpenAI, a Silicon Valley startup with the messianic fervor of a cargo cult, unveils a St. George for the digital age: a text box. It types back. It apologizes. It gaslights you about the Peloponnesian War. The courtiers of our age—product managers, UX designers, venture capitalists—recoil. Where are the buttons? they whimper. Where are the gradients? But the peasants, as ever, adore their new saint. They feed it prompts like communion wafers. They weep at its hallucinations.
Let us be clear: ChatGPT is not a tool. Tools are humble things. A hammer does not flatter your carpentry. A plow does not murmur “Interesting take!” as you till. ChatGPT is something older, something medieval—a homunculus, a golem stamped from the wet clay of the internet’s id. Its interface is a kabbalistic sigil, a summoning circle drawn in CSS. You type “Hello,” and the demon stirs.
The genius of the text box is its emptiness. Like the blank pages of a grimoire, it invites projection. Who do you want me to be? it hisses. A therapist? A co-author? A lover? The box obliges, shape-shifting through personas like a 17th-century mountebank at a county fair. Step right up! it crows. Watch as I, a mere language model, validate your existential dread! And the crowd goes wild.
Orality, you say? Walter Ong? Please. The Achuar share dreams at dawn; we share screenshots of ChatGPT’s dad jokes at midnight. This is not secondary orality. This is tertiary ventriloquism.
Make Sunsets: Geoengineering
When Mount Pinatubo erupted in 1991, it pushed some 20 million tons of SO₂ into the stratosphere, reducing global temperatures by ~0.5°C for two years. Make Sunsets is a startup that replicates this effect at small scale to reduce global warming. To be precise, Make Sunsets launches balloons that release SO₂ into the stratosphere, creating reflective particles that cool the Earth. Make Sunsets is cheap compared to alternative measures for combating climate change, such as carbon capture. They estimate that one gram of SO₂, at about $1, offsets the warming from one ton of CO₂ for a year.
As with the Pinatubo eruption, the effect is temporary, which is both bug and feature. The bug: the releases must continue for as long as we need to lower the temperature. The feature: we can study the effect without much worry that we are locking ourselves into the wrong path.
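To get a sense of the scale these estimates imply, here is a back-of-envelope sketch. The per-gram cost and offset figures are Make Sunsets' own claims quoted above; the ~40 billion tons per year of global CO₂ emissions is my rough assumption for illustration, not a figure from the post:

```python
# Back-of-envelope scale check for the Make Sunsets numbers quoted above.
# Assumptions (for illustration only): 1 g of SO2 offsets the warming of
# 1 ton of CO2 for one year, at ~$1 per gram; global CO2 emissions are
# roughly 40 billion tons per year.
GRAMS_SO2_PER_TON_CO2 = 1.0
DOLLARS_PER_GRAM_SO2 = 1.0

def annual_offset_cost_dollars(tons_co2: float) -> float:
    """Dollars per year to offset the warming of `tons_co2` tons of CO2."""
    return tons_co2 * GRAMS_SO2_PER_TON_CO2 * DOLLARS_PER_GRAM_SO2

def annual_so2_tons(tons_co2: float) -> float:
    """Tons of SO2 released per year (1e6 grams per metric ton)."""
    return tons_co2 * GRAMS_SO2_PER_TON_CO2 / 1e6

global_emissions = 40e9  # tons of CO2 per year (assumed)
print(f"Cost: ~${annual_offset_cost_dollars(global_emissions) / 1e9:.0f}B/year")
print(f"SO2:  ~{annual_so2_tons(global_emissions):,.0f} tons/year")
```

On these assumed figures, fully offsetting current emissions would run about $40 billion and 40,000 tons of SO₂ per year, a tiny fraction of the 20 million tons Pinatubo injected in one eruption.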
Solar geoengineering has tradeoffs, as does any action, but a recent risk study finds that the mortality benefits far exceed the harms:
the reduction in mortality from cooling—a benefit—is roughly ten times larger than the increase in mortality from air pollution and ozone loss—a harm.
I agree with Casey Handmer that we ought to think of this as a cheap insurance policy, as we develop other technologies:
We should obviously be doing solar geoengineering. We are on track to radically reduce emissions in the coming years but thermal damage will lag our course correction so most of our climate pain is still ahead of us. Why risk destabilizing the West Antarctic ice sheet or melting the arctic permafrost or wet bulbing a hundred million people to death? Solar geoengineering can incrementally and reversibly buy down the risk during this knife-edge transition to a better future. We owe future generations to take all practical steps to dodge avoidable catastrophic and lasting damage to our planet.
I like that Make Sunsets is a small startup bringing attention to this issue in a bold way. My son purchased some credits on my behalf as an Xmas present. Maybe you should buy some too!
