Category: Web/Tech

Jason Furman on AI contestability

This ease of switching has forced companies to pass the gains from innovation on to users. Free tiers now offer capabilities that recently would have seemed almost unimaginable. OpenAI pioneered a $20-per-month subscription three years ago, a price point many competitors matched. That price has not changed, even as features and performance have improved substantially.

One recent analysis found that “GPT-4-equivalent performance now costs $0.40/million tokens versus $20 in late 2022.” That is the equivalent of a 70 percent annual deflation rate — remarkable by any standard, especially in a time when affordability has become a dominant public concern.
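The quoted figures imply the deflation rate directly. A minimal arithmetic check, assuming the gap from late 2022 to the present is roughly three years (the ~70 percent figure in the text is consistent with this):

```python
# Implied annual deflation from the quoted cost drop:
# GPT-4-equivalent tokens fell from $20 to $0.40 per million.
start_cost = 20.00   # $/million tokens, late 2022
end_cost = 0.40      # $/million tokens, present
years = 3            # assumed elapsed time

annual_factor = (end_cost / start_cost) ** (1 / years)
annual_deflation = 1 - annual_factor
print(f"{annual_deflation:.0%} annual deflation")  # roughly 73%
```

A shorter or longer assumed window moves the annualized rate around, but any reading of these numbers yields deflation on the order of 70 percent per year.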

And this is only the foundational model layer. On top of it sits a sprawling ecosystem of consumer applications, enterprise tools, device integrations and start-ups aiming to serve niches as specific as gyms and hair salons.

Users aren’t the only ones switching. The people who work at these companies move from one to another, a sharp contrast to work in Silicon Valley during the era of do-not-poach agreements.

The entire NYT piece is very good.

“They” don’t want you to know this

Prompt:

Can a parent limit a kid’s screen time simply by tweaking some of the settings on the smart phone? Are these services available?

GPT Thinking answer:

Yes. On both iPhone and Android, a parent can limit a kid’s screen time largely through built-in settings (no extra app required), and there are also optional third-party services.

There is much more detail at the link.

Public Finance in the Age of AI: A Primer

Transformative artificial intelligence (TAI) – machines capable of performing virtually all economically valuable work – may gradually erode the two main tax bases that underpin modern tax systems: labor income and human consumption. We examine optimal taxation across two stages of artificial intelligence (AI)-driven transformation. First, if AI displaces human labor, we find that consumption taxation may serve as a primary revenue instrument, with differential commodity taxation gaining renewed relevance as labor distortions lose their constraining role. In the second stage, as autonomous artificial general intelligence (AGI) systems both produce most economic value and absorb a growing share of resources, taxing human consumption may become an inadequate means of raising revenue. We show that the taxation of autonomous AGI systems can be framed as an optimal harvesting problem and find that the resulting tax rate on AGI depends on the rate at which humans discount the future. Our analysis provides a theoretically grounded approach to balancing efficiency and equity in the Age of AI. We also apply our insights to evaluate specific proposals such as taxes on robots, compute, and tokens, as well as sovereign wealth funds and windfall clauses.

That is from Anton Korinek and Lee Lockwood.

Is there an aggregate demand problem in an AGI world?

No.  Let’s say AI is improving very rapidly, and affecting the world more rapidly and more radically than I think is plausible.  Let’s just say.

All of a sudden there are incredible things you can spend your money on.

Since there is (possibly) radical deflation, you might be tempted to just hold all your money and buy nothing.  Pick vegetables from your garden.  But the high marginal utility of the new goods and services will get you to spend, especially since you know that plenitude will bring you, in relative terms, a lower marginal utility for marginal expenditures in the future.

You might even go crazy spending.  If nothing else, buy new and improved vegetable seeds for your garden.  That same example shows that spending is robust to you losing your job, even assuming no reemployment is possible.  In this world, there are significant Pigou effects on wealth.

Fed policy has no problem mattering in this world.  Other people of course will wish to use the new Fed-sprayed liquidity to invest.  They might even invest in AI-related goods and services, not all of which will be controlled by “billionaires.”

Liquidity trap arguments, if they are to work at all, require a pretty miserable environment for investment and also consumption.

Note by the way, that liquidity traps were supposed to apply to currency only!  If you try to apply the concept to money more generally, when most forms of holding money bear interest rates of return, the whole concept collapses.

So there is not an aggregate demand problem in this economy, even if the social situation feels volatile or uncomfortable.  After that, Say’s Law holds.  If AI produces a lot more stuff, income is generated from that and the economy keeps going, whether or not the resulting distribution pleases your sense of morality.  Along the way, prices adjust as need be.  If unemployment rises significantly, prices fall too, all the more.  I am not saying everyone ends up happy here, but you cannot have a) a flood of goods and services, and b) billions accruing to the AI owners, without also c) prices being at a level where most people can afford to buy a whole bunch of things.  Otherwise, where do you think all the AI revenue is coming from?  The new output has to go somewhere, and sorry people it is simply not all trapped in currency hoards.  Be just a little Walrasian here, please.  (I would call it Huttian instead.)

Besides, why assume that “the machines” here are reaping all the surplus?  Are they the scarce factor of production?  Maybe it is hard to say in advance, but do not take any particular assumptions for granted here, ask to see them spelt out.  One simple scenario is that the regions with energy and data centres become much wealthier, and people need to move to those areas.  Maybe they do not do this quickly enough, à la our earlier history with the Rust Belt.  That is a problem worth worrying about, but it is nothing like the recent collapse concerns that have been circulating.

The whole Citrini scenario is incorrect right off the bat.  Very little of it is based on sound macroeconomic reasoning.  See Eli’s very good comments too.  Nicholas also.  Dare I say they should have consulted with the AIs for a bit longer?

Daniel Litt on AI and Math

Daniel Litt is a professor of mathematics at the University of Toronto. He has been active in evaluating AI models for many years and is generally seen as a skeptic pushing back at hype. He has a very interesting statement updating his thoughts:

In March 2025 I made a bet with Tamay Besiroglu, cofounder of RL environment company Mechanize, that AI tools would not be able to autonomously produce papers I judge to be at a level comparable to that of the best few papers published in 2025, at comparable cost to human experts, by 2030. I gave him 3:1 odds at the time; I now expect to lose this bet.

Much of what I’ll say here is not factually very different from what I’ve written before. I’ve slowly updated my timelines over the past year, but if one wants to speculate about the long-term future of math research, a difference of a few years is not so important. My trigger for writing this post is that, despite all of the above, I think I was not correctly calibrated as to the capabilities of existing models, let alone near-future models. This was more apparent in the mood of my comments than their content, which was largely cautious.

To be sure, the models are not yet as original or creative as the very best human mathematicians (who is?) but:

Can an LLM invent the notion of a scheme, or of a perfectoid space, or whatever your favorite mathematical object is? (Could I? Could you? Obviously this is a high bar, and not necessary for usefulness.) Can it come up with a new technique? Execute an argument that isn’t “routine for the right expert”? Make an interesting new definition? Ask the right question?

…I am skeptical that there is any mystical aspect of mathematics research intrinsically inaccessible to models, but it is true that human mathematics research relies on discovering analogies and philosophies, and performing other non-rigorous tasks where model performance is as yet unclear.

Podcast with Jake Sullivan and Jon Finer

Mostly about geopolitics, plenty of fresh content.  And here is the transcript.  Excerpt:

Jon Finer:

Should the United States be willing to take military action to defend Taiwan? It’s a thorny question for politicians to answer, but we’d be interested in your view.

Tyler Cowen:

Well, this is what economists would call a mixed strategy. Ex-ante, we should have strategic ambiguity, and not just say, we’re not going to defend Taiwan. And when Joe Biden said, “Well, we are going to defend Taiwan,” I was quite happy.

Jon Finer:

Four times. Four times.

Tyler Cowen:

Four times, yes. I know there’s different versions of how it was walked back and the like, but it should be unclear. That said, when push comes to shove, if China has made its move, you have to look at what are the terms of the deal? What are they going to do with TSMC to our best knowledge? What’s the domestic quality chip production in the United States? How do we feel about Japan and maybe South Korea getting nuclear weapons? Can South Korea remain an autonomous nation? Those are a lot of balls to juggle and they’re all hard to judge at this moment. But I think ex-ante, we should definitely create some risk that we will go to war over Taiwan, but then make the best decision ex-post. But China knows that too, right? They’re not fools. They’ve studied game theory.

Jake Sullivan:

Tyler, I’m going to put you down as that being Tyler Cowen’s version of strategic ambiguity.

Tyler Cowen:

It may not be that different from your version.

Jake Sullivan:

Exactly.

Recommended, and I also talk about my secret, unpublished China book, still pending at Tsinghua, almost certainly forever.  And we cover UAPs and curling as well.

GPT as a Measurement Tool

We present the GABRIEL software package, which uses GPT to quantify attributes in qualitative data (e.g. how “pro innovation” a speech is). GPT is evaluated on classification and attribute rating performance against 1000+ human annotated tasks across a range of topics and data. We find that GPT as a measurement tool is accurate across domains and generally indistinguishable from human evaluators. Our evidence indicates that labeling results do not depend on the exact prompting strategy used, and that GPT is not relying on training data contamination or inferring attributes from other attributes. We showcase the possibilities of GABRIEL by quantifying novel and granular trends in Congressional remarks, social media toxicity, and county-level school curricula. We then apply GABRIEL to study the history of tech adoption, using it to assemble a novel dataset of 37,000 technologies. Our analysis documents a tenfold decline of time lags from invention to adoption over the industrial age, from ~50 years to ~5 years today. We quantify the increasing dominance of companies and the U.S. in innovation, alongside characteristics that explain whether a technology will be adopted slowly or speedily.

That is from a new NBER working paper by Hemanth Asirvatham, Elliott Mokski, and Andrei Shleifer.
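The paper’s core measurement step — asking a model to rate an attribute of a text — can be sketched in a few lines. This is illustrative only, not the GABRIEL package’s actual API: the prompt wording, the 0–100 scale, and the helper names are assumptions, and the model call itself is omitted.

```python
import re

def build_rating_prompt(text: str, attribute: str) -> str:
    """Illustrative prompt asking a model to score an attribute 0-100."""
    return (
        f"On a scale of 0-100, how '{attribute}' is the following passage?\n"
        f"Reply with only 'Score: <number>'.\n\n"
        f"Passage: {text}"
    )

def parse_score(reply: str) -> int:
    """Pull the numeric score out of a 'Score: 72'-style model reply."""
    match = re.search(r"Score:\s*(\d+)", reply)
    if match is None:
        raise ValueError(f"unparseable reply: {reply!r}")
    score = int(match.group(1))
    if not 0 <= score <= 100:
        raise ValueError(f"score out of range: {score}")
    return score

# The model call (OpenAI, Anthropic, etc.) would sit between these two
# steps; only prompt construction and reply parsing are shown here.
prompt = build_rating_prompt("We must fund bold new research.", "pro innovation")
print(parse_score("Score: 88"))  # → 88
```

The paper’s finding that results are robust to the exact prompting strategy suggests the details of the prompt matter less than one might fear.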

India AI Data MCP

The Government of India’s Ministry of Statistics and Program Implementation has created an impressive Model Context Protocol (MCP) to connect AIs to Indian datasets. An AI connected to data via an MCP essentially knows the entire codebook and can make use of the data like an expert. Once connected, one can query the data in natural language and quickly create graphs and statistical analysis. I connected Claude to the MCP and created an elegant dashboard with data from India’s Annual Survey of Industries. Check it out.

The mainstream view

Multiple studies have shown that smartphone and social media use among teens has either minimal effects on their mental health or none at all. As a 2024 review published by an American Psychological Association journal put it: “There is no evidence that time spent on social media is correlated with adolescent mental health problems.”

And this:

Advocates of bans compare social media to alcohol or tobacco, where the harms are indisputable and the benefits are minimal. But the internet, including social media, is more analogous to books, magazines or television. I may not want my sons watching “The Texas Chain Saw Massacre” or reading “Fifty Shades of Grey,” but it would be crazy to ban books and films for kids altogether.

But that is the nature of these social media bans. Australia’s law not only restricted access to platforms such as Instagram and TikTok but also banned kids under 16 from having YouTube, X and Reddit accounts. Even Substack had to modify its practices.

Here is more from the excellent Sam Bowman.  And many teens make money through “digital side hustles”; in this day and age, that is what a teenage job often means.

Liberal AI

Can AI be liberal? In what sense? One answer points to the liberal insistence on freedom of choice, understood as a product of the commitment to personal autonomy and individual dignity. Mill and Hayek are of course defining figures here, emphasizing the epistemic foundations for freedom of choice. “Choice Engines,” powered by AI and authorized or required by law, might promote liberal goals (and in the process, produce significant increases in human welfare). A key reason is that they can simultaneously (1) preserve autonomy, (2) respect dignity, and (3) help people to overcome inadequate information and behavioral biases, which can produce internalities, understood as costs that people impose on their future selves, and also externalities, understood as costs that people impose on others. Different consumers care about different things, of course, which is a reason to insist on a high degree of freedom of choice, even in the presence of internalities and externalities. AI-powered Choice Engines can respect that freedom, not least through personalization. Nonetheless, AI-powered Choice Engines might be enlisted by insufficiently informed or self-interested actors, who might exploit inadequate information or behavioral biases, and thus compromise liberal goals. AI-powered Choice Engines might also be deceptive or manipulative, again compromising liberal goals, and legal safeguards are necessary to reduce the relevant risks. Illiberal or antiliberal AI is not merely imaginable; it is in place. Still, liberal AI is not an oxymoron. It could make life less nasty, less brutish, less short, and less hard – and more free.

By Cass Sunstein.

Science should be machine-readable

One of the leading tasks of our time:

We develop a machine-automated approach for extracting results from papers, which we assess via a comprehensive review of the entire eLife corpus. Our method facilitates a direct comparison of machine and peer review, and sheds light on key challenges that must be overcome in order to facilitate AI-assisted science. In particular, the results point the way towards a machine-readable framework for disseminating scientific information. We therefore argue that publication systems should optimize separately for the dissemination of data and results versus the conveying of novel ideas, and the former should be machine-readable.

Here is the paper by A. Sina Booeshagh, Laura Luebbert, and Lior Pachter.  Via John Tierney.
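What a “machine-readable framework for disseminating scientific information” might look like in miniature: a structured record per reported result, rather than prose alone. The schema below is purely illustrative — the field names are my assumptions, not the paper’s proposal — and the example values are made up.

```python
from dataclasses import dataclass, asdict

@dataclass
class ExtractedResult:
    """Hypothetical machine-readable record for one reported result.

    Field names are illustrative, not the paper's actual schema.
    """
    paper_doi: str
    claim: str    # natural-language statement of the result
    measure: str  # what kind of quantity is reported, e.g. "fold change"
    value: float
    units: str

# A fabricated example record, just to show the shape of the data.
record = ExtractedResult(
    paper_doi="10.0000/example",
    claim="Treatment increased expression of gene X",
    measure="fold change",
    value=2.3,
    units="dimensionless",
)
print(asdict(record)["value"])  # → 2.3
```

The point of such a format is that a machine reviewer — or a meta-analysis pipeline — can consume results directly, while the “novel ideas” portion of a paper remains prose for humans.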

Rebuilding our world, with reference to strong AI

When 2012 passed into 2013, we did not have to rebuild our world, not in most countries at least.  It sufficed to make adjustments at the margin.

After the Roman Empire fell, parts of Europe had to rebuild their worlds.  It took a long time, but they ended up doing pretty well.

After the American Revolution, the newly independent colonies had to rebuild their own world.  They did so brutally, but with considerable success.

After WWII, Western Europe had the chance to rebuild its own world, and did a great job.

We moderns are not used to having to rebuild our world.

It is now the case that strong AI is here/coming, and we will have to rebuild our own world.  Many of us are terrified at this prospect, others are just extremely pessimistic.  It seems so impossible.  How are all the new pieces supposed to fit together?  Who amongst us can explain that process in a reassuring way?

Yet we have done it many times before.  Not always with success, however.  After WWI ended, Europe was supposed to rebuild its own world, but they came up with something far worse than what they had before.  Nonetheless, in the broader sweep of history world rebuilding projects have had positive expected value.

And so we will rebuild our world yet again.  Or maybe you think we are simply incapable of that.

As this happens, it can be useful to distinguish “criticisms of AI” from “people who cannot imagine that world rebuilding will go well.”  A lot of what parades as the former is actually the latter.

In any case, it all will be quite something to witness.

India’s AI wedding buffet

Shruti Rajagopalan surveys much of the AI policy debate in India.  Excerpt:

If there is a single domain where India’s AI ambitions will succeed or fail, it is energy. And energy in India is not a technology problem. It is a political economy problem, arguably the most intractable one the country faces.

India’s peak electricity demand hit 250 GW in May 2024, up from 143 GW a decade earlier. The IEA forecasts 6.3 percent annual growth through 2027, faster than any major economy. Cooling demand alone could reach 140 GW of peak load by 2030. One number captures the trajectory. For each incremental degree in daily average temperature, peak demand now rises by more than 7 GW. In 2019 the figure was half that. India is getting hotter, richer, and more electricity-hungry simultaneously.

State-controlled distribution companies have accumulated $83.7 billion in debt because energy prices have been politically distorted for decades. Over 50 GW of renewable capacity sits underutilized. About 60 GW is stranded behind inadequate transmission. The shortage is financial and infrastructural, not resource-based. Without reforming distribution pricing, governance, and grid investment ($50 billion estimated by 2035), new renewable capacity will not become reliable electricity. It will become another line item on a DISCOM balance sheet no one wants to read.

India’s electricity reaches consumers through 72 distribution companies, 44 of them state-owned, collectively the most financially distressed utilities in the world. Accumulated losses stood at ₹6.92 trillion ($76.89 billion) as of March 2024, rising every year despite five government bailouts since 2002.

Substantive throughout.

Natural and Artificial Ice

Excellent Veritasium video on the 19th century ice industry. Shipping ice from America to India would hardly seem like a wise idea—it’s hard to imagine ever getting a committee to approve such a venture—but entrepreneurs are free to try wacky ideas all the time, and sometimes they pay off, resulting in great riches. That’s the story of the “Ice King,” Frederic Tudor, who lost money for years before figuring out the insulation and logistics needed to make the trade profitable.

What I hadn’t fully appreciated is how the ice trade reshaped shipping, diet, and city design before the invention of mechanical refrigeration. Ice created the cold chain, and the cold chain made it possible to move fresh meat, fish, and produce over long distances. That in turn enabled cities to grow far beyond what local agriculture could support and shifted the American diet from salted and smoked provisions toward fresh food.

The profits of the ice trade encouraged investment in artificial ice which initially was met with resistance—natural ice is created by God!—a classic example of incumbents wrapping their economic interests in moral language, a pattern we see repeated with every disruptive technology from margarine to ridesharing.

Lots of lessons in the video about option value, permissionless innovation, and creative destruction. New technologies destroy old industries and create new ones that no one could have foreseen. The moral panic over artificial ice replacing the natural kind is no doubt familiar.

Hat tip: Naveen Nvn