Why I think AI take-off is relatively slow

I’ve already covered much of this in my podcast with Dwarkesh, but I thought it would be useful to write it all out in one place.  I’ll assume you already know the current state of the debate.  Here goes:

1. Due to the Baumol-Bowen cost disease, less productive sectors tend to become a larger share of the economy over time.  This has been happening since the early days of the American economy.  A big chunk of current gdp already consists of slow-to-respond, highly inefficient, governmental or government-subsidized sectors.  They just won’t adopt AI, or use it effectively, all that quickly.  As I said to an AI guy a few days ago, “The way I can convince you is to have you sit in on a Faculty Senate meeting.”  And the more efficient AI becomes, the more this trend is likely to continue, which slows the prospective measured growth gains from AI.
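To see the mechanism, here is a minimal two-sector sketch in Python, with made-up numbers rather than any calibration: the “fast” sector’s productivity grows ten percent a year, the “slow” sector’s not at all, and real consumption proportions are held fixed.  The slow sector’s share of nominal gdp rises, and measured aggregate growth sinks toward the slow sector’s rate.

```python
# Minimal Baumol cost-disease sketch; all numbers are illustrative, not data.
# With fixed real consumption proportions, the slow sector's relative price
# rises, so its nominal gdp share rises, dragging down measured growth.

A_fast, A_slow = 1.0, 1.0    # productivity levels
g_fast, g_slow = 0.10, 0.00  # annual productivity growth rates

for year in range(1, 41):
    # with fixed real proportions, nominal shares are inverse to productivity
    share_fast = (1 / A_fast) / (1 / A_fast + 1 / A_slow)
    agg_growth = share_fast * g_fast + (1 - share_fast) * g_slow
    A_fast *= 1 + g_fast
    if year in (1, 10, 20, 40):
        print(f"year {year:2d}: slow-sector share of gdp {1 - share_fast:.2f}, "
              f"measured growth {agg_growth:.3f}")
```

Run it and aggregate growth falls from five percent toward a fraction of a percent, even though the fast sector never stops improving.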

2. Human bottlenecks become more important the more productive AI becomes.  Let’s say AI increases the rate of good pharma ideas by 10x.  Well, until the FDA gets its act together, the relevant constraint is the rate of drug approval, not the rate of drug discovery (a toy illustration follows 2b).

2b. These do not have to be regulatory obstacles, though many are.  The bottlenecks may be slow adopters, people in the workplace who hate AI, energy constraints, and much more.  It simply is not the case that all workplace inputs are rising in lockstep, quite the contrary.
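To make the logic of #2 concrete, a toy Python sketch with assumed numbers (none of them real FDA figures): throughput is the minimum across pipeline stages, so multiplying one stage by 10x need not move output at all.

```python
def pipeline_throughput(discoveries_per_year, approvals_per_year):
    # the binding constraint, not the strongest stage, sets the flow
    return min(discoveries_per_year, approvals_per_year)

print(pipeline_throughput(50, 40))    # today: 40 drugs/yr, approval binds
print(pipeline_throughput(500, 40))   # 10x discovery via AI: still 40 drugs/yr
```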

3. The O-Ring model makes AI hard to work with.  The O-Ring model stipulates that, in some settings, it is the worst performer who sets the overall level of productivity.  (In the NBA, for instance, it may be the quality of the worst defender on the floor, since the player your worst defender is supposed to guard can just keep taking open shots.)  Soon enough, at least in the settings where AI is supposed to shine, the worst performers will be the humans.  The AIs will make the humans somewhat better, but not that much better all that quickly.

This is a variant of #2, but in more extreme form.  A simple way to put it is that you are not smart enough to notice directly how much better o5 will be than o3.  For various complex computational tasks, not observed by humans, the more advanced model of course will be more effective.  But when it comes to working with humans, those extra smarts largely will be wasted (a toy calculation follows 3b).

3b. The human IQ-wages gradient is quite modest, suggesting that more IQ in the system does not raise productivity dramatically.  You might think that does not hold across the super-intelligent margin the machines will inhabit, but the O-Ring model suggests otherwise, apart from some specialized calculations where the machine does not need to collaborate with humans.
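For the curious, here is a minimal sketch of Kremer’s (1993) O-Ring production function, where output is proportional to the product of each task’s quality (success probability); the team sizes and quality numbers are illustrative only.  Swapping in a near-perfect AI for one task helps, but further gains in the AI’s quality are swamped by the human links in the chain.

```python
# O-Ring sketch: output scales with the PRODUCT of task qualities,
# so the weakest performer dominates. Numbers are made up.
from math import prod

def o_ring_output(qualities, scale=100.0):
    return scale * prod(qualities)

human_team          = [0.9, 0.9, 0.9]     # three human-performed tasks
with_strong_ai      = [0.9, 0.9, 0.999]   # one task handed to a near-perfect AI
with_even_better_ai = [0.9, 0.9, 0.9999]  # the next model generation

print(o_ring_output(human_team))           # 72.9
print(o_ring_output(with_strong_ai))       # ~80.92: the human links now bind
print(o_ring_output(with_even_better_ai))  # ~80.99: extra smarts, little gain
```

The jump from o3-level to o5-level quality on one task moves team output by well under a tenth of a percent, which is the point about not being able to notice the difference.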

4. I don’t think the economics of AI are well described by “an increase in labor supply,” “an increase in TFP,” or “an increase in capital,” though it is some of each of those.  It is more like “some Star Trek technology fell into our backyard, and how long will it take us to figure out how to integrate it with other things humans do?”  You can debate how long that will take, but the Solow model, the Romer model, and their offshoots will not give us reliable answers.

5. There is a vast literature on the diffusion of new technologies (go ask DeepResearch!).  Historically, diffusion usually takes more time than the most sophisticated observers expect.  Electricity, for instance, arguably took up to forty years.  I do think AI will spread faster than that, but reading this literature will bring you back down to earth.

6. Historically, gdp growth is remarkably smooth, albeit for somewhat mysterious reasons.  North America is a vastly different place than it was in the year 1600, technologically and otherwise.  Yet there are remarkably few years when the economic growth rate is all that far from two percent.  There is a Great Depression, some years of higher growth, some stagnation, and a few major wars, but even in those cases we are not so far from two percent.  I do not pretend to be able to model this satisfactorily (though the above factors surely have relevance in non-AI settings too), but unless you have figured this puzzle out, do not be too confident in any prediction that is so very far from two percent.

7. I’ve gone on record as suggesting that AI will boost economic growth rates by half a percentage point a year.  That is very much a guess.  It does mean that, with compounding, the world is very different a few decades out.  It also means that, year to year, non-infovores will not necessarily notice huge changes in their environments (see the arithmetic below).  So far I have not seen evidence to contradict that broad expectation.
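The back-of-the-envelope arithmetic, assuming a two percent baseline for illustration: an extra half point of growth is roughly a half-percent difference in any single year, which almost no one will perceive, but about a sixteen percent larger economy after thirty years.

```python
# Compounding a half-point growth boost; 2% baseline is assumed, not measured.
baseline, boosted, years = 1.02, 1.025, 30

one_year_gap = boosted / baseline - 1
level_gap = (boosted / baseline) ** years - 1
print(f"after 1 year:  {one_year_gap:.2%} larger")             # ~0.49%
print(f"after {years} years: {level_gap:.1%} larger economy")  # ~15.8%
```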

8. Current prices are not forecasting any kind of very rapid transformation.  And yes, market traders do know about AI, AGI, and the like.

9. None of these views are based on pessimism about the capabilities of AI models.  I hear various views, often from people working in the area, and on the tech per se I have an optimism that corresponds to the weighted average of what I hear.
