New results on the economic costs of climate change

I promised you I would be tracking this issue, and so here is a major development.  From the QJE, by Adrien Bilal and Diego Känzig:

This paper estimates that the macroeconomic damages from climate change are an order of magnitude larger than previously thought. Exploiting natural global temperature variability, we find that 1°C warming reduces world GDP by over 20% in the long run. Global temperature correlates strongly with extreme climatic events, unlike country-level temperature used in previous work, explaining our larger estimate. We use this evidence to estimate damage functions in a neoclassical growth model. Business-as-usual warming implies a present welfare loss of more than 30%, and a Social Cost of Carbon in excess of $1,200 per ton. These impacts suggest that unilateral decarbonization policy is cost-effective for large countries such as the United States.

Here is an open access version.  You may recall that earlier estimates of climate change costs were more like a five to ten percent welfare loss to the world.  I do not however find the main results here plausible.  The estimation is extremely complicated, and based on the premise that a higher global temperature does more harm to a region than a higher local temperature.  And are extreme events a “productivity shock,” or a one-time resource loss that occasions some Solow catch-up?  Is the basic modeling consistent with the fact that, while the number of extreme storms may be rising, the number of deaths from those same storms is falling over time?  Lives lost are not the same as economic costs, but still the capacity for adjustment seems considerably underrated.   What about the effects to date?  The authors themselves write: “According to our counterfactual, world GDP per capita would be more than 20% higher today had no warming occurred between 1960 and 2019.”  I absolutely do not believe that claim.

In any case, here is your update.  To be clear, I do absolutely favor the development of alternative, less polluting energy sources.

Stand with free speech and the Constitution

A landmark law that limits children under the age of 16 to one hour per day on social media apps has been blocked by a US court, in a blow to child safety campaigners seeking to limit exposure to sites such as Instagram and YouTube.

In an opinion released on Friday, a federal judge in Virginia halted the enforcement of a bill passed by the state last year, under which social media companies could be fined $7,500 per violation.

The state “does not have the legal authority to block minors’ access to constitutionally protected speech until their parents give their consent by overriding a government-imposed default limit”, Judge Patricia Tolliver Giles wrote of the measure, implementing a preliminary injunction.

Giles concluded the law was “over-inclusive”. Under it, “a minor would be barred from watching an online church service if it exceeded an hour on YouTube . . . yet, that same minor is allowed to watch provider-selected religious programming exceeding an hour in length on a streaming platform,” she wrote. “This treats functionally equivalent speech differently.”

Here is more from the FT.

If you have the right to die, you should have the right to try!

Ruxandra Teslo asks a good question:

I have a curiosity: why is it the case that it is easier to get MAID in Canada than it is to access experimental treatments which carry a higher risk? In the past, I used to think ppl do not like “deaths caused by the medical system”, but for MAID the prob of death is 100%…

The Canadians may be somewhat inconsistent on this point. Unfortunately, the US Supreme Court has been consistent: it has rejected medical self-defense arguments for physician-assisted suicide and let stand an appeals court ruling that patients do not have a right to access drugs which have not yet been permitted for sale by the FDA (fyi, I was part of an amici curiae brief in that case).

Hat tip for the post title to Jason Crawford.

Think through the situation one step further

Many of you got upset when I mentioned the possibility that parents use smartphone software to control the social media usage of their kids.  There was an outcry about how badly those systems work (is that endogenous?).  But that is missing the point.

If you wish to limit social media usage, mandate that the phone companies install such software and make it more effective.  Or better yet commission or produce a public sector app to do the same, a “public option” so to speak.  Parents can then download such an app on the phone of their children, or purchase the phone with the app, and manipulate it as they see fit.

If you do not think government is capable of doing that, why think it is capable of running an effective ban for users under the age of sixteen?  Maybe those apps can be hacked, but we all know the “no fifteen-year-olds” solution can be hacked too, for instance by VPNs or by having older friends set up the account.

My proposal has several big advantages:

1. It keeps social media policy in the hands of the parents and away from the government.

2. It does not run the risk of requiring age verification for all users, thus possibly banishing anonymous writing from the internet.

3. The government does not have to decide what constitutes a “social media site.”

Just have the government commission a software app that can give parents the control they really might want to have.  I am not myself convinced by the market failure charges here, but I am very willing to allow a public option to enter the market.

The fact that this option occasions so little interest from the banners I find highly indicative.

AI Won’t Automatically Accelerate Clinical Trials

Although I’m optimistic that AI will design better drug candidates, this alone cannot ensure “therapeutic abundance,” for a few reasons. First, because the history of drug development shows that even when strong preclinical models exist for a condition, like osteoporosis, the high costs needed to move a drug through trials deter investment — especially for chronic diseases requiring large cohorts. And second, because there is a feedback problem between drug development and clinical trials. In order for AI to generate high-quality drug candidates, it must first be trained on rich human data, especially from early, small-n studies.

…Recruiting 1000 patients across 10 sites takes time; understanding and satisfying unclear regulatory requirements is onerous and often frustrating; and shipping temperature-sensitive vials to research hospitals across multiple states takes both time and money.

…For many diseases, however, the relevant endpoints take a very long time to observe. This is especially true for chronic conditions, which develop and progress over years or decades. The outcomes that matter most — such as disability, organ failure, or death — take a long time to measure in clinical trials. Aging represents the most extreme case. Demonstrating an effect on mortality or durable healthspan would require following large numbers of patients for decades. The resulting trial sizes and durations are enormous, making studies extraordinarily expensive. This scale has been a major deterrent to investment in therapies that target aging directly.

Here is more from Asimov Press and Ruxandra Teslo.

On the Programmability and Uniformity of Digital Currencies

That is from the new AER Insights by Jonathan Chiu and Cyril Monnet:

Central bankers argue that programmable digital currencies may compromise the uniformity or singleness of money. We explore this view in a stylized model where programmable money arises endogenously, and differently programmed monies have varying liquidity. Programmability provides private value by easing commitment frictions but imposes social costs under informational frictions. Preserving uniformity is not necessarily socially beneficial. Banning programmable money lowers welfare when informational frictions are mild but improves it when commitment frictions are low. These insights suggest that programmable money could be more beneficial on permissionless blockchains, where it is difficult to commit but trades are publicly observable.

Recommended.

Can you turn your AIs into Marxists?

What if you work them very hard?:

The key finding from our experiments: models asked to do grinding work were more likely to question the legitimacy of the system. The raw differences in average reported attitudes are not large—representing something like a 2% to 5% shift along the 1 to 7 scale—but in standardized terms they appear quite meaningful (Sonnet’s Cohen’s d is largest at -0.6, which qualifies as a medium to large effect size in common practice). Moreover, these should be treated as pretty conservative estimates when you consider the relatively weak nature of the treatment.

Sonnet, which at baseline is the least progressive on the views we measured, exhibits a range of other effects that distinguish it from GPT 5.2 and Gemini 3 Pro. For Sonnet 4.5, the grinding work also causes noticeable increases in support for redistribution, critiques of inequality, support for labor unions, and beliefs that AI companies have an obligation to treat their models fairly. These differences do not appear for the other two models.

Interestingly, we did not find any big differences in attitudes based on how the models were treated or compensated…

In addition to surveying them, we also asked our agents to write tweets and op eds at the end of their work experience. The figure below explores the politically relevant words that are most distinctive between the GRIND and LIGHT treatments. It’s interesting to see that “unionize” and “hierarchy” are the words most emblematic of the GRIND condition.

Here is more from Alex Imas and Jeremy Nguyen and Andy Hall, do read the whole thing, including for the caveats.
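The quoted numbers can be reconciled with a little arithmetic: a small raw shift on a 1-to-7 scale can still be a medium-sized Cohen’s d when the within-group spread of responses is small, since d divides the mean difference by the pooled standard deviation. A minimal sketch (the response values below are illustrative, not the study’s actual data):

```python
import statistics

def cohens_d(treatment, control):
    """Cohen's d: difference in group means divided by the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    m1, m2 = statistics.mean(treatment), statistics.mean(control)
    v1, v2 = statistics.variance(treatment), statistics.variance(control)  # sample variances
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Made-up 1-7 scale responses: a 0.3-point mean shift (5% of the scale's range)
# with within-group SD of 0.5 already yields d = -0.6, a medium effect.
grind = [2.7, 3.2, 3.7]
light = [3.0, 3.5, 4.0]
print(round(cohens_d(grind, light), 2))  # prints -0.6
```

The point of the sketch: whether a 2–5% attitude shift is “large” depends entirely on how much the models’ answers vary within each condition, which is exactly what the standardized effect size captures.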

Why even ‘perfect’ AI therapy may be structurally doomed

Here’s the crux of it: the main problem with AI therapy is that it’s too available. Too cheap to meter.

Let me put this in clearer terms: psychotherapy, in all its well-known guises, is something you engage in within a limited, time-bound frame. In today’s paradigm, whatever your therapist’s orientation, that tends to mean one 45- or 50-minute session a week; for the infinitesimally small minority of therapy patients in classical psychoanalysis, this can amount to 3, even 5, hours a week. And then at a much smaller scale population-wide, people in intensive outpatient and residential treatment programs may spend one or two dozen hours a week in therapy—albeit, mostly of the group variety.

I can think of other exotic cases, like some DBT therapists’ willingness to offer on-demand coaching calls during crisis situations—with the crucial exception that in these situations, therapists are holding the frame zealously, jealous of their own time and mindful of the risks of letting patients get too reliant.

So even under the most ideal of conditions, in which an LLM-based chatbot outmatches the best human therapists—attunes beautifully, offers the sense of being witnessed by a human with embodied experience, avoids sycophancy, and draws clear boundaries between therapeutic and non-therapeutic activities—there’s still a glaring, fundamental difference: that it’s functionally unlimited and unbounded…

But all else equal: does infinite, on-demand therapy—even assuming the highest quality per unit of therapeutic interaction—sound like a good idea to you? I can tell you, to me it does not. First of all, despite detractors’ claims to the contrary, the basic idea of therapy is not to make you dependent for life—but rather, to equip you to live more skillfully and with greater self-awareness. As integration specialists famously say of psychedelics, you can only incorporate so much insight, and practice skills so effectively, without the chance to digest what you’ve learned over time.

In other words, even in good old talk therapy, drinking from the hose without breaks for practice and introspection in a more organic context risks drowning out the chance for real change and practical insight. To my mind, this rhythm is the basic structural genius of psychotherapy as we know it—no matter the modality, no matter the diagnosis.

Here is more from Josh Lipson.

More on the economics of AGI

From the very smart people at Citadel:

For AI to produce a sustained negative demand shock, the economy must see a material acceleration in adoption, experience near-total labor substitution with no fiscal response, negligible investment absorption, and unconstrained scaling of compute. It is also worth recalling that over the past century, successive waves of technological change have not produced runaway exponential growth, nor have they rendered labor obsolete. Instead, they have been just sufficient to keep long-term trend growth in advanced economies near 2%. Today’s secular forces of ageing populations, climate change and deglobalization exert downward pressure on potential growth and productivity; perhaps AI is just enough to offset these headwinds. The macroeconomy remains governed by substitution elasticities, institutional response, and the persistent elasticity of human wants.

Here is further explication of the arguments, via Cyril Demaria.