Think through the situation one step further
Many of you got upset when I mentioned the possibility that parents use smartphone software to control the social media usage of their kids. There was an outcry about how badly those systems work (is that endogenous?). But that is missing the point.
If you wish to limit social media usage, mandate that the phone companies install such software and make it more effective. Or better yet, commission or produce a public sector app to do the same, a “public option” so to speak. Parents can then download such an app onto their children’s phones, or purchase the phone with the app preinstalled, and configure it as they see fit.
If you do not think government is capable of doing that, why think it is capable of running an effective ban for users under the age of sixteen? Maybe those apps can be hacked, but we all know the “no fifteen-year-olds” solution can be hacked too, for instance by VPNs or by having older friends set up the account.
My proposal has several big advantages:
1. It keeps social media policy in the hands of the parents and away from the government.
2. It does not run the risk of requiring age verification for all users, thus possibly banishing anonymous writing from the internet.
3. The government does not have to decide what constitutes a “social media site.”
Just have the government commission a software app that can give parents the control they really might want to have. I am not myself convinced by the market failure charges here, but I am very willing to allow a public option to enter the market.
The fact that this option occasions so little interest from the banners I find highly indicative.
AI Won’t Automatically Accelerate Clinical Trials
Although I’m optimistic that AI will design better drug candidates, this alone cannot ensure “therapeutic abundance,” for a few reasons. First, the history of drug development shows that even when strong preclinical models exist for a condition, like osteoporosis, the high costs of moving a drug through trials deter investment — especially for chronic diseases requiring large cohorts. Second, there is a feedback problem between drug development and clinical trials. In order for AI to generate high-quality drug candidates, it must first be trained on rich human data, especially from early, small-n studies.
…Recruiting 1000 patients across 10 sites takes time; understanding and satisfying unclear regulatory requirements is onerous and often frustrating; and shipping temperature-sensitive vials to research hospitals across multiple states takes both time and money.
…For many diseases, however, the relevant endpoints take a very long time to observe. This is especially true for chronic conditions, which develop and progress over years or decades. The outcomes that matter most — such as disability, organ failure, or death — take a long time to measure in clinical trials. Aging represents the most extreme case. Demonstrating an effect on mortality or durable healthspan would require following large numbers of patients for decades. The resulting trial sizes and durations are enormous, making studies extraordinarily expensive. This scale has been a major deterrent to investment in therapies that target aging directly.
Can you turn your AIs into Marxists?
What if you work them very hard?:
The key finding from our experiments: models asked to do grinding work were more likely to question the legitimacy of the system. The raw differences in average reported attitudes are not large—representing something like a 2% to 5% shift along the 1 to 7 scale—but in standardized terms they appear quite meaningful (Sonnet’s Cohen’s d is largest at -0.6, which qualifies as a medium to large effect size in common practice). Moreover, these should be treated as pretty conservative estimates when you consider the relatively weak nature of the treatment.
Sonnet, which at baseline is the least progressive on the views we measured, exhibits a range of other effects that distinguish it from GPT 5.2 and Gemini 3 Pro. For Sonnet 4.5, the grinding work also causes noticeable increases in support for redistribution, critiques of inequality, support for labor unions, and beliefs that AI companies have an obligation to treat their models fairly. These differences do not appear for the other two models.
Interestingly, we did not find any big differences in attitudes based on how the models were treated or compensated…
In addition to surveying them, we also asked our agents to write tweets and op eds at the end of their work experience. The figure below explores the politically relevant words that are most distinctive between the GRIND and LIGHT treatments. It’s interesting to see that “unionize” and “hierarchy” are the words most emblematic of the GRIND condition.
Here is more from Alex Imas and Jeremy Nguyen and Andy Hall, do read the whole thing, including for the caveats.
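For intuition on how a small raw shift can register as a medium-to-large standardized effect, here is a minimal sketch of the Cohen’s d arithmetic. The specific means and spread below are made-up illustrative numbers, not the study’s data:

```python
import numpy as np

def cohens_d(treatment, control):
    """Standardized mean difference using a pooled standard deviation."""
    t = np.asarray(treatment, dtype=float)
    c = np.asarray(control, dtype=float)
    nt, nc = len(t), len(c)
    pooled_var = ((nt - 1) * t.var(ddof=1) + (nc - 1) * c.var(ddof=1)) / (nt + nc - 2)
    return (t.mean() - c.mean()) / np.sqrt(pooled_var)

# Illustrative numbers only (not the paper's data): on a 1-7 scale, a ~0.25-point
# drop in rated system legitimacy -- about 4% of the 6-point span -- against a
# 0.4-point rating spread already yields d near -0.6.
rng = np.random.default_rng(0)
control = np.clip(rng.normal(4.00, 0.4, 500), 1, 7)
grind = np.clip(rng.normal(3.75, 0.4, 500), 1, 7)
print(round(cohens_d(grind, control), 2))  # about -0.6
```

The point is simply that when ratings cluster tightly, even a quarter-point move on a seven-point scale is large relative to the noise.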
Why even ‘perfect’ AI therapy may be structurally doomed
Here’s the crux of it: the main problem with AI therapy is that it’s too available. Too cheap to meter.
Let me put this in clearer terms: psychotherapy, in all its well-known guises, is something you engage in within a limited, time-bound frame. In today’s paradigm, whatever your therapist’s orientation, that tends to mean one 45- or 50-minute session a week; for the infinitesimally small minority of therapy patients in classical psychoanalysis, this can amount to 3, even 5, hours a week. And then at a much smaller scale population-wide, people in intensive outpatient and residential treatment programs may spend one or two dozen hours a week in therapy—albeit mostly of the group variety.
I can think of other exotic cases, like some DBT therapists’ willingness to offer on-demand coaching calls during crisis situations—with the crucial exception that in these situations, therapists are holding the frame zealously, jealous of their own time and mindful of the risks of letting patients get too reliant.
So even under the most ideal of conditions, in which an LLM-based chatbot outmatches the best human therapists—attunes beautifully, offers the sense of being witnessed by a human with embodied experience, avoids sycophancy, and draws clear boundaries between therapeutic and non-therapeutic activities—there’s still a glaring, fundamental difference: that it’s functionally unlimited and unbounded…
But all else equal: does infinite, on-demand therapy—even assuming the highest quality per unit of therapeutic interaction—sound like a good idea to you? I can tell you, to me it does not. First of all, despite detractors’ claims to the contrary, the basic idea of therapy is not to make you dependent for life—but rather, to equip you to live more skillfully and with greater self-awareness. As integration specialists famously say of psychedelics, you can only incorporate so much insight, and practice skills so effectively, without the chance to digest what you’ve learned over time.
In other words, even in good old talk therapy, drinking from the hose without breaks for practice and introspection in a more organic context risks drowning out the chance for real change and practical insight. To my mind, this rhythm is the basic structural genius of psychotherapy as we know it—no matter the modality, no matter the diagnosis.
Here is more from Josh Lipson.
More on the economics of AGI
From the very smart people at Citadel:
For AI to produce a sustained negative demand shock, the economy must see a material acceleration in adoption, near-total labor substitution, no fiscal response, negligible investment absorption, and unconstrained scaling of compute. It is also worth recalling that over the past century, successive waves of technological change have not produced runaway exponential growth, nor have they rendered labor obsolete. Instead, they have been just sufficient to keep long-term trend growth in advanced economies near 2%. Today’s secular forces of ageing populations, climate change, and deglobalization exert downward pressure on potential growth and productivity; perhaps AI is just enough to offset these headwinds. The macroeconomy remains governed by substitution elasticities, institutional response, and the persistent elasticity of human wants.
Here is further explication of the arguments, via Cyril Demaria.
Jason Furman on AI contestability
This ease of switching has forced companies to pass the gains from innovation on to users. Free tiers now offer capabilities that recently would have seemed almost unimaginable. OpenAI pioneered a $20-per-month subscription three years ago, a price point many competitors matched. That price has not changed, even as features and performance have improved substantially.
One recent analysis found that “GPT-4-equivalent performance now costs $0.40/million tokens versus $20 in late 2022.” That is the equivalent of a 70 percent annual deflation rate — remarkable by any standard, especially in a time when affordability has become a dominant public concern.
And this is only the foundational model layer. On top of it sits a sprawling ecosystem of consumer applications, enterprise tools, device integrations and start-ups aiming to serve niches as specific as gyms and hair salons.
Users aren’t the only ones switching. The people who work at these companies move from one to another, a sharp contrast to work in Silicon Valley during the era of do-not-poach agreements.
The entire NYT piece is very good.
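The deflation arithmetic in the quoted passage is easy to verify; here is a quick sketch, assuming the price drop spans roughly the three years from late 2022 (the timeframe is my inference from the quote):

```python
# Back out the implied annual deflation rate from the quoted price drop:
# $20 -> $0.40 per million tokens, over an assumed three-year span.
start_price, end_price, years = 20.0, 0.40, 3
annual_factor = (end_price / start_price) ** (1 / years)  # fraction of the price retained each year
print(f"{1 - annual_factor:.0%} annual deflation")  # prints "73% annual deflation", i.e. roughly 70 percent
```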
“They” don’t want you to know this
Prompt:
Can a parent limit a kid’s screen time simply by tweaking some of the settings on the smartphone? Are these services available?
GPT Thinking answer:
Yes. On both iPhone and Android, a parent can limit a kid’s screen time largely through built-in settings (no extra app required), and there are also optional third-party services.
There is much more detail at the link.
Public Finance in the Age of AI: A Primer
Transformative artificial intelligence (TAI) – machines capable of performing virtually all economically valuable work – may gradually erode the two main tax bases that underpin modern tax systems: labor income and human consumption. We examine optimal taxation across two stages of artificial intelligence (AI)-driven transformation. First, if AI displaces human labor, we find that consumption taxation may serve as a primary revenue instrument, with differential commodity taxation gaining renewed relevance as labor distortions lose their constraining role. In the second stage, as autonomous artificial general intelligence (AGI) systems both produce most economic value and absorb a growing share of resources, taxing human consumption may become an inadequate means of raising revenue. We show that the taxation of autonomous AGI systems can be framed as an optimal harvesting problem and find that the resulting tax rate on AGI depends on the rate at which humans discount the future. Our analysis provides a theoretically grounded approach to balancing efficiency and equity in the Age of AI. We also apply our insights to evaluate specific proposals such as taxes on robots, compute, and tokens, as well as sovereign wealth funds and windfall clauses.
That is from Anton Korinek and Lee Lockwood.
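For readers unfamiliar with the framing, the textbook optimal-harvesting problem gives the flavor of the result. This is the standard renewable-resource model, offered purely as an illustration of the class of problem the abstract invokes, not as the paper’s actual specification:

```latex
% Illustrative textbook setup: a planner "harvests" (taxes) a stock x(t)
% at rate h(t), discounting the future at rate rho.
\max_{h(\cdot)} \int_0^{\infty} e^{-\rho t}\, h(t)\, dt
\qquad \text{s.t.} \quad \dot{x}(t) = g(x(t)) - h(t), \quad x(t) \ge 0.
% At an interior steady state, the optimum equates the stock's marginal
% growth rate to the discount rate:
g'(x^{*}) = \rho .
```

The more patiently humans discount the future (lower ρ), the more stock is left to grow and the lighter the harvest today, which is one way to see why an optimal AGI tax rate would hinge on the human discount rate.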
Is there an aggregate demand problem in an AGI world?
No. Let’s say AI is improving very rapidly, and affecting the world more rapidly and more radically than I think is plausible. Let’s just say.
All of a sudden there are incredible things you can spend your money on.
Since there is (possibly) radical deflation, you might be tempted to just hold all your money and buy nothing. Pick vegetables from your garden. But the high marginal utility of the new goods and services will get you to spend, especially since you know that plenitude will bring you, in relative terms, a lower marginal utility for marginal expenditures in the future.
You might even go crazy spending. If nothing else, buy new and improved vegetable seeds for your garden. That same example shows that spending is robust to losing your job, even assuming no reemployment is possible. In this world, there are significant Pigou effects on wealth: deflation raises the real value of your money holdings, and that added wealth itself supports spending.
Fed policy has no problem mattering in this world. Other people of course will wish to use the new Fed-sprayed liquidity to invest. They might even invest in AI-related goods and services, not all of which will be controlled by “billionaires.”
Liquidity trap arguments, if they are to work at all, require a pretty miserable environment for investment and also consumption.
Note, by the way, that liquidity traps were supposed to apply to currency only! If you try to apply the concept to money more generally, when most forms of holding money bear interest, the whole concept collapses.
So there is not an aggregate demand problem in this economy, even if the social situation feels volatile or uncomfortable. After that, Say’s Law holds. If AI produces a lot more stuff, income is generated from that and the economy keeps going, whether or not the resulting distribution pleases your sense of morality. Along the way, prices adjust as need be. If unemployment rises significantly, prices fall all the more. I am not saying everyone ends up happy here, but you cannot have a) a flood of goods and services, and b) billions accruing to the AI owners, without also c) prices at a level where most people can afford to buy a whole bunch of things. Otherwise, where do you think all the AI revenue is coming from? The new output has to go somewhere, and sorry people, it is simply not all trapped in currency hoards. Be just a little Walrasian here, please. (I would call it Huttian instead.)
Besides, why assume that “the machines” here are reaping all the surplus? Are they the scarce factor of production? Maybe it is hard to say in advance, but do not take any particular assumptions for granted here; ask to see them spelled out. One simple scenario is that the regions with energy and data centers become much wealthier, and people need to move to those areas. Maybe they do not do this quickly enough, à la our earlier history with the Rust Belt. That is a problem worth worrying about, but it is nothing like the recent collapse concerns that have been circulating.
The whole Citrini scenario is incorrect right off the bat. Very little of it is based on sound macroeconomic reasoning. See Eli’s very good comments too. Nicholas also. Dare I say they should have consulted with the AIs for a bit longer?
Daniel Litt on AI and Math
Daniel Litt is a professor of mathematics at the University of Toronto. He has been active in evaluating AI models for many years and is generally seen as a skeptic pushing back against hype. He has a very interesting statement updating his thoughts:
In March 2025 I made a bet with Tamay Besiroglu, cofounder of RL environment company Mechanize, that AI tools would not be able to autonomously produce papers I judge to be at a level comparable to that of the best few papers published in 2025, at comparable cost to human experts, by 2030. I gave him 3:1 odds at the time; I now expect to lose this bet.
Much of what I’ll say here is not factually very different from what I’ve written before. I’ve slowly updated my timelines over the past year, but if one wants to speculate about the long-term future of math research, a difference of a few years is not so important. My trigger for writing this post is that, despite all of the above, I think I was not correctly calibrated as to the capabilities of existing models, let alone near-future models. This was more apparent in the mood of my comments than their content, which was largely cautious.
To be sure, the models are not yet as original or creative as the very best human mathematicians (who is?) but:
Can an LLM invent the notion of a scheme, or of a perfectoid space, or whatever your favorite mathematical object is? (Could I? Could you? Obviously this is a high bar, and not necessary for usefulness.) Can it come up with a new technique? Execute an argument that isn’t “routine for the right expert”? Make an interesting new definition? Ask the right question?
…I am skeptical that there is any mystical aspect of mathematics research intrinsically inaccessible to models, but it is true that human mathematics research relies on discovering analogies and philosophies, and performing other non-rigorous tasks where model performance is as yet unclear.
Podcast with Jake Sullivan and Jon Finer
Mostly about geopolitics, plenty of fresh content. And here is the transcript. Excerpt:
Jon Finer:
Should the United States be willing to take military action to defend Taiwan? It’s a thorny question for politicians to answer, but we’d be interested in your view.
Tyler Cowen:
Well, this is what economists would call a mixed strategy. Ex ante, we should have strategic ambiguity, and not just say we’re not going to defend Taiwan. And when Joe Biden said, “Well, we are going to defend Taiwan,” I was quite happy.
Jon Finer:
Four times. Four times.
Tyler Cowen:
Four times, yes. I know there are different versions of how it was walked back and the like, but it should be unclear. That said, when push comes to shove, if China has made its move, you have to look at: what are the terms of the deal? What are they going to do with TSMC, to our best knowledge? What’s the domestic quality chip production in the United States? How do we feel about Japan and maybe South Korea getting nuclear weapons? Can South Korea remain an autonomous nation? Those are a lot of balls to juggle and they’re all hard to judge at this moment. But I think ex ante, we should definitely create some risk that we will go to war over Taiwan, but then make the best decision ex post. But China knows that too, right? They’re not fools. They’ve studied game theory.
Jake Sullivan:
Tyler, I’m going to put you down as that being Tyler Cowen’s version of strategic ambiguity.
Tyler Cowen:
It may not be that different from your version.
Jake Sullivan:
Exactly.
Recommended, and I also talk about my secret, unpublished China book, still pending at Tsinghua, almost certainly forever. And we cover UAPs and curling as well.
GPT as a Measurement Tool
We present the GABRIEL software package, which uses GPT to quantify attributes in qualitative data (e.g., how “pro innovation” a speech is). GPT is evaluated on classification and attribute-rating performance against 1,000+ human-annotated tasks across a range of topics and data. We find that GPT as a measurement tool is accurate across domains and generally indistinguishable from human evaluators. Our evidence indicates that labeling results do not depend on the exact prompting strategy used, and that GPT is not relying on training-data contamination or inferring attributes from other attributes. We showcase the possibilities of GABRIEL by quantifying novel and granular trends in Congressional remarks, social media toxicity, and county-level school curricula. We then apply GABRIEL to study the history of tech adoption, using it to assemble a novel dataset of 37,000 technologies. Our analysis documents a tenfold decline in time lags from invention to adoption over the industrial age, from ~50 years to ~5 years today. We quantify the increasing dominance of companies and the U.S. in innovation, alongside characteristics that explain whether a technology will be adopted slowly or speedily.
That is from a new NBER working paper by .
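The general pattern is straightforward to prototype. Here is a minimal sketch of the “LLM as attribute rater” idea using the OpenAI client; the model name, prompt wording, and 0-100 scale are my assumptions for illustration, not GABRIEL’s actual interface:

```python
# A rough sketch of using an LLM to score an attribute in text -- the
# general technique the abstract describes, not GABRIEL's own API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def rate_attribute(text: str, attribute: str) -> int:
    """Ask the model for a 0-100 rating of how strongly `text` expresses `attribute`."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; swap in whatever you use
        messages=[
            {"role": "system",
             "content": "You rate texts. Reply with a single integer from 0 to 100."},
            {"role": "user",
             "content": f"How '{attribute}' is the following passage?\n\n{text}"},
        ],
    )
    return int(response.choices[0].message.content.strip())

print(rate_attribute("We must cut red tape so startups can flourish.", "pro innovation"))
```

In practice one would validate such ratings against human annotations, as the paper does, before trusting them at scale.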
India AI Data MCP
The Government of India’s Ministry of Statistics and Programme Implementation has created an impressive Model Context Protocol (MCP) server to connect AIs to Indian datasets. An AI connected to data via an MCP server essentially knows the entire codebook and can make use of the data like an expert. Once connected, one can query the data in natural language and quickly create graphs and statistical analyses. I connected Claude to the MCP server and created an elegant dashboard with data from India’s Annual Survey of Industries. Check it out.
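For those who want to query such a server programmatically rather than through Claude, here is a rough sketch using the official MCP Python SDK (`pip install mcp`). The endpoint URL, tool name, and arguments below are placeholders; consult the ministry’s documentation for the real ones:

```python
# Connect to a remote MCP server, list its tools (the "codebook" an AI sees),
# and make a call. URL and tool details are placeholders, not MoSPI's real ones.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

MCP_URL = "https://example.gov.in/mcp"  # placeholder endpoint

async def main() -> None:
    async with streamablehttp_client(MCP_URL) as (read_stream, write_stream, _):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])
            # Hypothetical tool call; real names and arguments will differ.
            result = await session.call_tool(
                "query_dataset",
                {"dataset": "annual_survey_of_industries", "question": "output by state"},
            )
            print(result)

asyncio.run(main())
```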
The mainstream view
Multiple studies have shown that smartphone and social media use among teens has either minimal effects on their mental health or none at all. As a 2024 review published in an American Psychological Association journal put it: “There is no evidence that time spent on social media is correlated with adolescent mental health problems.”
And this:
Advocates of bans compare social media to alcohol or tobacco, where the harms are indisputable and the benefits are minimal. But the internet, including social media, is more analogous to books, magazines or television. I may not want my sons watching “The Texas Chain Saw Massacre” or reading “Fifty Shades of Grey,” but it would be crazy to ban books and films for kids altogether.
But that is the nature of these social media bans. Australia’s law not only restricted access to platforms such as Instagram and TikTok but also banned kids under 16 from having YouTube, X and Reddit accounts. Even Substack had to modify its practices.
Here is more from the excellent Sam Bowman. And many teens make money through “digital side hustles”; in this day and age, that is what a teenage job often means.
Liberal AI
Can AI be liberal? In what sense? One answer points to the liberal insistence on freedom of choice, understood as a product of the commitment to personal autonomy and individual dignity. Mill and Hayek are of course defining figures here, emphasizing the epistemic foundations for freedom of choice. “Choice Engines,” powered by AI and authorized or required by law, might promote liberal goals (and in the process, produce significant increases in human welfare). A key reason is that they can simultaneously (1) preserve autonomy, (2) respect dignity, and (3) help people to overcome inadequate information and behavioral biases, which can produce internalities, understood as costs that people impose on their future selves, and also externalities, understood as costs that people impose on others. Different consumers care about different things, of course, which is a reason to insist on a high degree of freedom of choice, even in the presence of internalities and externalities. AI-powered Choice Engines can respect that freedom, not least through personalization. Nonetheless, AI-powered Choice Engines might be enlisted by insufficiently informed or self-interested actors, who might exploit inadequate information or behavioral biases, and thus compromise liberal goals. AI-powered Choice Engines might also be deceptive or manipulative, again compromising liberal goals, and legal safeguards are necessary to reduce the relevant risks. Illiberal or antiliberal AI is not merely imaginable; it is in place. Still, liberal AI is not an oxymoron. It could make life less nasty, less brutish, less short, and less hard – and more free.
Can AI be liberal? In what sense? One answer points to the liberal insistence on freedom of choice, understood as a product of the commitment to personal autonomy and individual dignity. Mill and Hayek are of course defining figures here, emphasizing the epistemic foundations for freedom of choice. “Choice Engines,” powered by AI and authorized or required by law, might promote liberal goals (and in the process, produce significant increases in human welfare). A key reason is that they can simultaneously (1) preserve autonomy, (2) respect dignity, and (3) help people to overcome inadequate information and behavioral biases, which can produce internalities, understood as costs that people impose on their future selves, and also externalities, understood as costs that people impose on others. Different consumers care about different things, of course, which is a reason to insist on a high degree of freedom of choice, even in the presence of internalities and externalities. AI-powered Choice Engines can respect that freedom, not least through personalization. Nonetheless, AI-powered Choice Engines might be enlisted by insufficiently informed or self-interested actors, who might exploit inadequate information or behavioral biases, and thus co5mpromise liberal goals. AI-powered Choice Engines might also be deceptive or manipulative, again compromising liberal goals, and legal safeguards are necessary to reduce the relevant risks. Illiberal or antiliberal AI is not merely imaginable; it is in place. Still, liberal AI is not an oxymoron. It could make life less nasty, less brutish, less short, and less hard – and more free.