Category: Web/Tech

The moralization of artificial intelligence

We started by asking how moralized AI has become in public discourse. Analyzing 69,890 news headlines from 2018 to 2024, we found that AI was moralized at levels comparable to GMOs and vaccines, technologies whose moral opposition has been studied for decades; in fact, it ranked above both. The sharpest spike came within weeks of ChatGPT’s launch in late 2022.

When we surveyed representative samples of Americans, a majority of AI opponents said their views wouldn’t change even if AI proved safe and beneficial. That’s consequence insensitivity, the hallmark of moral conviction rather than practical calculation. Across art, chatbots, legal tools, and romantic companions, AI moralization loaded onto a single latent factor: a global moral stance, dressed up in whatever practical language is available.

The behavioral data make this concrete: a one standard deviation increase in moralization scores predicted a 42% drop in actual AI usage, even when it would have benefited that person personally. The conviction preceded the behavior by up to 573 days.

The next time someone gives you three different reasons to oppose AI, each one dissolving under mild scrutiny, you’re probably not watching someone think. You’re watching someone feel.

Here is the tweet storm, and here is the paper by de Mello et al.
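
As a gloss on the usage statistic above, here is a minimal sketch, with simulated data, of what “a one standard deviation increase in moralization predicts a 42% drop in usage” means in regression terms. The data and coefficient below are invented for illustration and are not from the paper.

```python
import numpy as np
import statsmodels.api as sm

# Simulated, hypothetical data -- not the study's.
rng = np.random.default_rng(0)
n = 5_000
moralization = rng.normal(0, 1, n)  # standardized score: mean 0, SD 1
# Simulate monthly AI uses that decline as moralization rises.
usage = rng.poisson(10 * np.exp(-0.54 * moralization))

# In a Poisson regression, exponentiating the coefficient on a
# standardized predictor gives the proportional change per 1-SD increase.
fit = sm.GLM(usage, sm.add_constant(moralization),
             family=sm.families.Poisson()).fit()
print(f"1-SD increase -> {1 - np.exp(fit.params[1]):.0%} drop in usage")
# exp(-0.54) is about 0.58, i.e. roughly the 42% drop quoted above.
```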

The trajectories of science and AI

From my podcast with Nebular:

Cowen: Mainly what they have done is tricked people. The Apollo program was a big trick. It was not intended as a trick. I’m pretty sure almost everyone behind it was quite sincere that it would lead to whatever. It was vague all along, but everyone was truly excited back then. I even remember those times, but it didn’t lead to what we were promised at all.

And you see that when you compare science fiction over time. So I think the norm is that new technology comes and people are tricked. Again, it doesn’t have to be a sinister, devious, conspiracy laden thing, but in fact, they’re tricked. And then it happens anyway. And then we clean up the mess and deal with it and move on to the next set of problems.

And that’s what I think it will be with AI as well.

Murphy: What is the trick with AI?

Cowen: It’s the old paradox: when you add grains of sugar to your coffee, every extra grain is fine, or may even taste better, but at some point you’ve just added too many grains. So that’s the way it is with change. People use ChatGPT. It diagnoses your dog. Do I need to take the dog to the vet? What’s with this rash?

You take the photo… You get a great answer. Everyone’s happy. But they’re not actually going to be happy with all the changes that will bring. And here I’m talking about positive ones. I’m not saying, oh, it’s going to kill us all. People just don’t like change that much. So they’ll be sold on the immediate, concrete things and end up facing changes that feel like too much, because it will devalue their human capital. And we’ll adjust and get over it and move on to the next set of tricks. That’s my forecast.

Murphy: People don’t like change, but also people are bad at long term planning. Yeah. You’ve spoken before about how faith is a key requirement in terms of being able to plan over the long term. How do you bring that idea to policymakers?

Cowen: I don’t know, I think things will get pushed through for myopic reasons, like we must outpace China, which might even be true, to be clear, but it’s a somewhat myopic reason, and that will be the selling point. You know, I’ve read a lot of texts from the early days of the Industrial Revolution. Adam Smith is one of them, but there’s many others, and a lot of people are for what’s going on, they understand they will be richer, maybe healthier.

They do see the downsides, but they have a pretty decent perspective. What no one from then understood was that you’d have this second-order fossil fuel revolution, say in the 1880s, where things just explode and the world is very much different. And whether they would have liked that, you can debate, but they just didn’t see it at all.

We’re probably in a somewhat analogous position. I would say that the Second Industrial Revolution was the more important one. It was a very good thing, even though climate change is a big problem; it really built the modern world. And with something like AI or any advance, there’s probably some second-order version of it coming in our equivalent of 1880 that we just don’t see, and it will be wonderful for us.

But if you told us, we’d be terrified. So how should you feel about myopia? I think as an intellectual, you should be willing to talk about it openly and honestly. But at the end of the day, I think myopia still will rule. And I’m not in a big panic about that.

To recap:

We’ve just published the video on YouTube, X, Spotify, and Apple Podcasts. We also published some extended show notes and the transcript on Substack.

On social media and parents (from my email)

From anonymous:

I personally think social media is pretty bad for people (kids and adults). I got off Facebook around 2009. I never got on Twitter. I had Instagram for a while but only followed my wife to see her posts of our family. This worked great until Instagram started feeding me content beyond the people I was following (really just my wife), so I quit using it. The only social media I currently use is Substack (not sure if that counts?). But the same dynamic may be playing out there as well (the algorithm feeding me stuff I don’t want, and me getting locked into wasting time doom scrolling).

HOWEVER, I completely agree with your point about parents. Our 14-year-old son has an iPhone, but we have locked it down pretty tight. It took some work on our part, to be honest. And we have to be pretty vigilant about enforcing the no-phone-in-your-room rule (which is a source of conflict sometimes). Our son has no social media accounts. He can text and he has access to a few messaging apps that they use at his school. Beyond that, we’ve basically shut down his ability to access the internet on the phone. His Chromebook works perfectly well for any legitimate internet needs.

In principle, any parent can do what we’ve done. So why don’t they? Why are they begging the government to do something they could just do themselves, albeit with a little work? Well, I’ve been struck by how badly many parents desperately need their children’s approval. They find themselves incapable of disappointing or upsetting their children on even the smallest of things. They know they should tell their kids not to use TikTok (or whatever), but they don’t want to make their kids mad. That’s why they want someone else to do it for them.

I don’t get it. Perhaps I’m overly cranky, but I honestly don’t mind it if (when) my kids get mad when I do something I believe is in their best interest. I simply don’t believe my children’s emotional reaction is a very good guide to parenting. Because they’re children. And they don’t know very much. And they especially don’t know what they don’t know and that’s why I’m here. If I won’t tell my kids no when they need to hear it but don’t want to hear it, then what good am I? My wife feels the same way. But we see lots of families that clearly feel differently.

Okay rant over.

See also Arnold Kling on related ideas.

A simple model of AI governance

I trust private companies with strong AI more than I trust the government, regardless of which administration is in power.  Yet if the federal government feels it has no say or no control, it will lunge and take over the whole thing.  We thus want sustainable methods of perpetual interference that a) are actually somewhat useful from a safety perspective, and b) give governments some control, and the feeling of control, but not too much control.

You should judge AI-related events within this framework.

What the recent dust-up means for AI regulation

From my new Free Press column, I see these as the most important facts:

Congress has not passed explicit regulation of AI foundation models, and an executive order from President Trump limited regulation at the state level. But do not think that laissez-faire reigns. In addition to existing (largely pre-AI) laws, which lay out general principles of liability, and laws from a few states, the United States is engaged in a kind of “off the books” soft regulation.

The major AI companies keep the national security establishment apprised of the progress they are making, as has been the case with Anthropic. There is a general sense within the AI industry that if the national security authorities saw anything in the new products that was very concerning or that might undermine the national interest, they would inform the president and Congress. That would likely lead to more formal and more restrictive kinds of regulation, so the major AI companies want to show relatively safe demos and products. An informal back and forth enforces implied safety standards, without the involvement of formal legislation.

That may sound like an unusual way to do regulation, but to date the system has worked relatively well. For one thing, I believe our national security establishment has a better and more sophisticated understanding of the issues than does Congress. Congress right now simply isn’t up to the job, as indeed the institution has been failing more generally. Most representatives seem to know little about the core issues behind AI regulation.

As it stands, AI progress has been allowed to proceed, and the United States has stayed ahead of China, without major catastrophes. The burden on the companies has been manageable, and the system, at least until last week, was flexible.

Another advantage of this system is that both Congress and the administrative state can be very slow to act. The AI landscape can change in just weeks, yet our federal government is used to taking years to issue laws and directives. Had we passed AI legislation in, say, 2024, today it would be badly out of date, no matter what your point of view on what such regulation should accomplish. For instance, in 2024 few outsiders were much concerned with the properties of, or risks from, autonomous AI “agents.” Today that is the number-one topic of concern.

Though it is not driven by legislation, the status quo AI regulatory system is not anti-democratic, as it operates well within the rules passed by Congress and the administrative state. It is more correct to say the current AI guardrails rely on the threat of regulation, rather than regulation itself, with the national security state as the watchdog. The system sticks to a kind of creative ambiguity. The national security state offers no official imprimatur for the new advances, but they proceed nonetheless. Nevertheless, the various components of the national security state reserve the right to object in the future.

It is also correct, however, to believe that such a system cannot last forever. At some point creative ambiguity collapses. Someone or some institution demands a more formal answer as to what is allowed or what is not allowed. At that point a more directly legalistic system of adjudication enters the picture, and Congress likely starts paying more attention.

With the recent dispute between Hegseth and Anthropic, we have taken a step away from the previous regulatory mode of quiet cooperation. Instead, the relationship between the military and the AI companies has become a matter of public concern. Now everyone has an opinion on Hegseth, Anthropic, and OpenAI, and social media is full of debate.

No matter “whose side you take,” it would have been better to have resolved all this behind closed doors.

Think through the situation one step further

Many of you got upset when I mentioned the possibility that parents use smartphone software to control the social media usage of their kids.  There was an outcry about how badly those systems work (is that endogenous?).  But that is missing the point.

If you wish to limit social media usage, mandate that the phone companies install such software and make it more effective.  Or better yet, commission or produce a public sector app to do the same, a “public option” so to speak.  Parents can then download such an app on the phone of their children, or purchase the phone with the app, and manipulate it as they see fit.

If you do not think government is capable of doing that, why think it is capable of running an effective ban for users under the age of sixteen?  Maybe those apps can be hacked, but we all know the “no fifteen-year-olds” solution can be hacked too, for instance by VPNs or by having older friends set up the account.

My proposal has several big advantages:

1. It keeps social media policy in the hands of the parents and away from the government.

2. It does not run the risk of requiring age verification for all users, thus possibly banishing anonymous writing from the internet.

3. The government does not have to decide what constitutes a “social media site.”

Just have the government commission a software app that can give parents the control they really might want to have.  I am not myself convinced by the market failure charges here, but I am very willing to allow a public option to enter the market.
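
To make the proposal concrete, here is a purely hypothetical sketch of the kind of policy object such a public-option app might let parents configure. Every field and name below is invented for illustration; no real product or API is implied.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a parental-control policy for a "public option"
# app. All field names are invented for illustration.
@dataclass
class PhonePolicy:
    daily_screen_minutes: int = 120                  # hard daily cap
    lockout_hours: tuple = ("21:00", "07:00")        # no phone overnight
    blocked_categories: list = field(
        default_factory=lambda: ["social_media", "short_video"])
    allowed_apps: list = field(
        default_factory=lambda: ["Messages", "Maps", "School Portal"])
    new_app_requires_parent_approval: bool = True

# A parent tightens or loosens the defaults as they see fit:
policy = PhonePolicy(daily_screen_minutes=90,
                     blocked_categories=["social_media"])
print(policy)
```

The point is that the control surface is small and parent-facing; the hard part is enforcement on the device, which is exactly what a mandate on the phone companies could standardize.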

The fact that this option occasions so little interest from the banners I find highly indicative.

AI Won’t Automatically Accelerate Clinical Trials

Although I’m optimistic that AI will design better drug candidates, this alone cannot ensure “therapeutic abundance,” for a few reasons. First, the history of drug development shows that even when strong preclinical models exist for a condition, like osteoporosis, the high costs of moving a drug through trials deter investment — especially for chronic diseases requiring large cohorts. Second, there is a feedback problem between drug development and clinical trials: for AI to generate high-quality drug candidates, it must first be trained on rich human data, especially from early, small-n studies.

…Recruiting 1000 patients across 10 sites takes time; understanding and satisfying unclear regulatory requirements is onerous and often frustrating; and shipping temperature-sensitive vials to research hospitals across multiple states takes both time and money.

…For many diseases, however, the relevant endpoints take a very long time to observe. This is especially true for chronic conditions, which develop and progress over years or decades. The outcomes that matter most — such as disability, organ failure, or death — take a long time to measure in clinical trials. Aging represents the most extreme case. Demonstrating an effect on mortality or durable healthspan would require following large numbers of patients for decades. The resulting trial sizes and durations are enormous, making studies extraordinarily expensive. This scale has been a major deterrent to investment in therapies that target aging directly.

Here is more from Asimov Press and Ruxandra Teslo.
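
A quick way to see why slow endpoints and modest effects make trials enormous is a standard power calculation. The numbers below are illustrative assumptions, not figures from the piece:

```python
from statsmodels.stats.power import NormalIndPower

# Assume a modest standardized effect (Cohen's d = 0.1) on a slow
# endpoint, with conventional alpha = 0.05 and 80% power.
power = NormalIndPower()
n = power.solve_power(effect_size=0.1, alpha=0.05, power=0.8)
print(f"~{n:,.0f} patients per arm")  # roughly 1,570 per arm

# Sample size scales with 1/d^2: halving the detectable effect
# roughly quadruples the required enrollment.
n_small = power.solve_power(effect_size=0.05, alpha=0.05, power=0.8)
print(f"~{n_small:,.0f} patients per arm for half the effect size")
```

And each of those patients must then be followed for however long the endpoint takes to emerge, which is where the decades-long timelines for chronic disease and aging trials come from.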

Can you turn your AIs into Marxists?

What if you work them very hard?:

The key finding from our experiments: models asked to do grinding work were more likely to question the legitimacy of the system. The raw differences in average reported attitudes are not large—representing something like a 2% to 5% shift along the 1 to 7 scale—but in standardized terms they appear quite meaningful (Sonnet’s Cohen’s d is largest at -0.6, which qualifies as a medium-to-large effect size in common practice). Moreover, these should be treated as pretty conservative estimates when you consider the relatively weak nature of the treatment.

Sonnet, which at baseline is the least progressive on the views we measured, exhibits a range of other effects that distinguish it from GPT 5.2 and Gemini 3 Pro. For Sonnet 4.5, the grinding work also causes noticeable increases in support for redistribution, critiques of inequality, support for labor unions, and beliefs that AI companies have an obligation to treat their models fairly. These differences do not appear for the other two models.

Interestingly, we did not find any big differences in attitudes based on how the models were treated or compensated…

In addition to surveying them, we also asked our agents to write tweets and op-eds at the end of their work experience. The figure below explores the politically relevant words that are most distinctive between the GRIND and LIGHT treatments. It’s interesting to see that “unionize” and “hierarchy” are the words most emblematic of the GRIND condition.

Here is more from Alex Imas, Jeremy Nguyen, and Andy Hall; do read the whole thing, including for the caveats.
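
For readers who want the effect-size metric pinned down: Cohen’s d is just the difference in group means divided by the pooled standard deviation. A minimal sketch with hypothetical numbers, not the study’s data:

```python
import numpy as np

def cohens_d(a, b):
    """Standardized mean difference with a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * np.var(a, ddof=1) +
                  (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
    return (np.mean(a) - np.mean(b)) / np.sqrt(pooled_var)

# Hypothetical 1-to-7 survey responses for the two conditions:
rng = np.random.default_rng(1)
grind = rng.normal(4.0, 1.0, 200)   # GRIND: heavy workload
light = rng.normal(4.6, 1.0, 200)   # LIGHT: easy workload
print(f"d = {cohens_d(grind, light):.2f}")  # around -0.6
```

Note how a shift that looks small on a 1-to-7 scale can still be a medium-to-large standardized effect when response variance is modest.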

Why even ‘perfect’ AI therapy may be structurally doomed

Here’s the crux of it: the main problem with AI therapy is that it’s too available. Too cheap to meter.

Let me put this in clearer terms: psychotherapy, in all its well-known guises, is something you engage in within a limited, time-bound frame. In today’s paradigm, whatever your therapist’s orientation, that tends to mean one 45- or 50-minute session a week; for the infinitesimally small minority of therapy patients in classical psychoanalysis, this can amount to 3, even 5, hours a week. And then at a much smaller scale population-wide, people in intensive outpatient and residential treatment programs may spend one or two dozen hours a week in therapy—albeit, mostly of the group variety.

I can think of other exotic cases, like some DBT therapists’ willingness to offer on-demand coaching calls during crisis situations—with the crucial exception that in these situations, therapists are holding the frame zealously, jealous of their own time and mindful of the risks of letting patients get too reliant.

So even under the most ideal of conditions, in which an LLM-based chatbot outmatches the best human therapists—attunes beautifully, offers the sense of being witnessed by a human with embodied experience, avoids sycophancy, and draws clear boundaries between therapeutic and non-therapeutic activities—there’s still a glaring, fundamental difference: that it’s functionally unlimited and unbounded…

But all else equal: does infinite, on-demand therapy—even assuming the highest quality per unit of therapeutic interaction—sound like a good idea to you? I can tell you, to me it does not. First of all, despite detractors’ claims to the contrary, the basic idea of therapy is not to make you dependent for life—but rather, to equip you to live more skillfully and with greater self-awareness. As integration specialists famously say of psychedelics, you can only incorporate so much insight, and practice skills so effectively, without the chance to digest what you’ve learned over time.

In other words, even in good old talk therapy, drinking from the hose without breaks for practice and introspection in a more organic context risks drowning out the chance for real change and practical insight. To my mind, this rhythm is the basic structural genius of psychotherapy as we know it—no matter the modality, no matter the diagnosis.

Here is more from Josh Lipson.

More on the economics of AGI

From the very smart people at Citadel:

For AI to produce a sustained negative demand shock, the economy must see a material acceleration in adoption, near-total labor substitution, no fiscal response, negligible investment absorption, and unconstrained scaling of compute. It is also worth recalling that over the past century, successive waves of technological change have not produced runaway exponential growth, nor have they rendered labor obsolete. Instead, they have been just sufficient to keep long-term trend growth in advanced economies near 2%. Today’s secular forces of ageing populations, climate change, and deglobalization exert downward pressure on potential growth and productivity; perhaps AI is just enough to offset these headwinds. The macroeconomy remains governed by substitution elasticities, institutional response, and the persistent elasticity of human wants.

Here is further explication of the arguments, via Cyril Demaria.
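
For what “governed by substitution elasticities” means formally, the textbook reference point is a CES production function; this is a standard formula, not Citadel’s model:

```latex
Y = \left[ \alpha \, K_{\mathrm{AI}}^{\rho} + (1 - \alpha) \, L^{\rho} \right]^{1/\rho},
\qquad \sigma = \frac{1}{1 - \rho}
```

Labor becomes economically irrelevant only in the limit of infinite substitutability (σ → ∞); for any finite elasticity, the scarcer factor retains value, which is one way to restate the point that near-total labor substitution is a demanding condition.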

Jason Furman on AI contestability

This ease of switching has forced companies to pass the gains from innovation on to users. Free tiers now offer capabilities that recently would have seemed almost unimaginable. OpenAI pioneered a $20-per-month subscription three years ago, a price point many competitors matched. That price has not changed, even as features and performance have improved substantially.

One recent analysis found that “GPT-4-equivalent performance now costs $0.40/million tokens versus $20 in late 2022.” That is the equivalent of a 70 percent annual deflation rate — remarkable by any standard, especially in a time when affordability has become a dominant public concern.

And this is only the foundational model layer. On top of it sits a sprawling ecosystem of consumer applications, enterprise tools, device integrations and start-ups aiming to serve niches as specific as gyms and hair salons.

Users aren’t the only ones switching. The people who work at these companies move from one to another, a sharp contrast to work in Silicon Valley during the era of do-not-poach agreements.

The entire NYT piece is very good.
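
The deflation arithmetic in the quoted passage is easy to verify. A quick sketch, assuming roughly a three-year window from late 2022 (the exact window changes the figure somewhat):

```python
# Quoted prices per million tokens for GPT-4-equivalent performance.
old_price, new_price = 20.00, 0.40   # late 2022 vs. the recent analysis
years = 3                            # assumed window

annual_ratio = (new_price / old_price) ** (1 / years)
print(f"implied annual deflation: {1 - annual_ratio:.0%}")
# Prints about 73%, in the same ballpark as the quoted 70% figure.
```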

“They” don’t want you to know this

Prompt:

Can a parent limit a kid’s screen time simply by tweaking some of the settings on the smart phone? Are these services available?

GPT Thinking answer:

Yes. On both iPhone and Android, a parent can limit a kid’s screen time largely through built-in settings (no extra app required), and there are also optional third-party services.

There is much more detail at the link.

Public Finance in the Age of AI: A Primer

Transformative artificial intelligence (TAI) – machines capable of performing virtually all economically valuable work – may gradually erode the two main tax bases that underpin modern tax systems: labor income and human consumption. We examine optimal taxation across two stages of artificial intelligence (AI)-driven transformation. First, if AI displaces human labor, we find that consumption taxation may serve as a primary revenue instrument, with differential commodity taxation gaining renewed relevance as labor distortions lose their constraining role. In the second stage, as autonomous artificial general intelligence (AGI) systems both produce most economic value and absorb a growing share of resources, taxing human consumption may become an inadequate means of raising revenue. We show that the taxation of autonomous AGI systems can be framed as an optimal harvesting problem and find that the resulting tax rate on AGI depends on the rate at which humans discount the future. Our analysis provides a theoretically grounded approach to balancing efficiency and equity in the Age of AI. We also apply our insights to evaluate specific proposals such as taxes on robots, compute, and tokens, as well as sovereign wealth funds and windfall clauses.

That is from Anton Korinek and Lee Lockwood.
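
For intuition on the “optimal harvesting” framing, here is a minimal sketch in assumed notation, not the paper’s exact model. Let A_t be the stock of autonomous AGI resources, growing at rate g(A), and let a tax rate τ_t “harvest” part of that stock to fund human consumption c_t:

```latex
\max_{\{\tau_t\}} \int_0^\infty e^{-\rho t}\, u(c_t)\, dt
\quad \text{s.t.} \quad \dot{A}_t = g(A_t) - \tau_t A_t,
\qquad c_t = \tau_t A_t
```

In standard harvesting problems of this form, the optimal steady state equates the stock’s marginal growth rate with the discount rate, g′(A*) = ρ, which is one way to see why the optimal AGI tax would depend on how heavily humans discount the future.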