Category: Web/Tech

Economists on AI and economic growth and employment

We completed the most comprehensive study of how economists and AI experts think AI will affect the U.S. economy. They predict major AI progress—but no dramatic break from economic trends: GDP growth rates similar to today’s and a moderate decline in labor force participation. However, when asked to consider what would happen in a world with extremely rapid progress in AI capabilities by 2030, they predict significant economic impacts by 2050:

• Annualized GDP growth of 3.5% (compared to 2.4% in 2025)

• A labor force participation rate of 55% (roughly 10 million fewer jobs)

• 80% of wealth held by the top 10% (highest since 1939)

That is from this very good and very detailed Twitter thread, worth reading in its entirety.  Note this:

Only 5.2% of the variance is between scenarios—attributable to disagreement about AI capabilities themselves…

Here is the full paper, over 200 pages long; I will be reading through it.  The list of authors is impressive, with Ezra Karger in the lead, also including Kevin Bryan, Basil Halperin, and many more.  For some while this will stand as the best set of estimates we have.  Here are the related forecasts of Seb Krier.

Is financial economics still economics?

That all sounded wonderful, and that core model and its offshoots dominated financial research for decades. The problem, however, was that it wasn’t true, or at least it wasn’t nearly as true as we had thought and hoped. When financial economists refined the models with more complete specifications, it turned out Beta didn’t predict stock returns much at all. Eugene Fama and Kenneth French delivered one of the final blows to earlier approaches with a 1992 paper that showed Beta didn’t have explanatory power over expected returns at all. Since Fama himself was one of the original architects of CAPM-like reasoning, and French was also a renowned finance economist, these revisions to the model were credible. For all their original promise, marginalism and the concomitant notion of diminishing marginal utility no longer seemed to help explain asset returns.
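For readers who have not seen it, the Beta in question is simply the slope from regressing a stock's excess returns on the market's excess returns; CAPM says that slope should drive expected returns. Here is a minimal sketch of the estimation, with made-up illustrative numbers rather than anything from the 1992 paper:

```python
import numpy as np

# Illustrative CAPM regression: the stock's Beta is the OLS slope of its
# excess returns on the market's excess returns. All numbers are simulated.
rng = np.random.default_rng(0)
market_excess = rng.normal(0.005, 0.04, 240)   # 20 years of monthly data (made up)
true_beta = 1.2
stock_excess = true_beta * market_excess + rng.normal(0, 0.05, 240)

# OLS intercept (alpha) and slope (Beta) via least squares
X = np.column_stack([np.ones_like(market_excess), market_excess])
alpha, beta = np.linalg.lstsq(X, stock_excess, rcond=None)[0]
print(f"estimated beta: {beta:.2f}")  # close to the simulated 1.2
```

The Fama-French point was that, in the actual cross-section of stocks, this estimated Beta turned out to have little power to explain differences in average returns once other characteristics were controlled for.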

Under one plausible account of intellectual history, you can date the decline of marginalism to that 1992 paper. In the most rigorous, data-oriented, and highest-paying field of economics, namely finance, marginalist constructs had every chance to succeed. In fact, they ran the board for several decades. But over time they failed. In the most prestigious field of economics, marginalism has been in full retreat for over 30 years, and it shows no signs of making a comeback.

We already know that financial practice is dominated by the (non-economist) quants. But how about financial economics research, the parts that are still done by economists? What direction is that work moving in?

I was struck by a 2024 paper published in the Journal of Financial Economics, one of the two leading journals of financial economics (Journal of Finance is the other). The authors are Scott Murray, Yusen Xia, and Houping Xiao, and the title is “Charting by Machines.” The core result is pretty simple, and best expressed in the well-written abstract:

“We test the efficient market hypothesis by using machine learning to forecast stock returns from historical performance. These forecasts strongly predict the cross-section of future stock returns. The predictive power holds in most subperiods and is strong among the largest 500 stocks. The forecasting function has important nonlinearities and interactions, is remarkably stable through time, and captures effects distinct from momentum, reversal and extant technical signals. These findings question the efficient market hypothesis and indicate that technical analysis and charting have merit. We also demonstrate that machine learning models that perform well in optimization continue to perform well out-of-sample.” Murray, Xia, and Xiao (2024, p. 1).

Or consider the new paper by Borri, Chetverikov, Liu, and Tsyvinski (2024). They propose a new non-linear, single-factor asset pricing model. From the abstract: “Most known finance and macro factors become insignificant controlling for our single-factor.” Yet you won’t find traditional economic variables discussed in this paper; it is all about the math, in particular an application of the Kolmogorov-Arnold representation theorem.

In other words, the successful approach to predicting returns is giving up on traditional portfolio theory and using the “theory-less” technique of machine learning. Although this is published in the Journal of Financial Economics, in some significant sense it is not economic reasoning at all. It is calculation, combined with expertise in math and computer science. The modeling is not economic modeling in a manner that has ties to marginalism or standard intuitive microeconomic theory. And the work is predicting excess returns in a pretty robust and successful way…

There is a recent working paper which is perhaps more striking yet, by Antoine Didisheim, Shikun (Barry) Ke, Bryan T. Kelly, and Semyon Malamud. They pick up from Arbitrage Pricing Theory (APT), a well-established idea from financial economics. APT typically looks for “factors” in the data which predict excess returns, and a traditional APT model might have found five or six such factors. Are “inflation” or perhaps “the term structure of interest rates” useful factors? Well, that can be debated, but if so, those results sound pretty intuitive. But those intuitions seem to be disappearing. These authors apply machine learning methods to look for more factors. As we know, machine learning is very good at finding non-obvious relationships in the data. The largest model they built has 360,000 (!) factors, and it reduces pricing errors by 54.8 percent relative to the classic six-factor model from Fama and French. Bravo to the authors, but what kinds of intuitions do you think can possibly be supported by those 360,000 factors?
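To see why a model with far more factors than observations is even estimable, here is a toy sketch, which is not the authors' method, just an illustration of the general machine-learning trick: ridge regularization makes an otherwise ill-posed high-dimensional regression solvable. All data below are simulated.

```python
import numpy as np

# Toy high-dimensional factor regression (simulated data, not the paper's
# method): with far more candidate factors than observations, plain OLS is
# ill-posed, but ridge regularization still yields a usable fit.
rng = np.random.default_rng(1)
n_obs, n_factors = 120, 2000              # factors vastly outnumber observations
F = rng.normal(size=(n_obs, n_factors))   # candidate factor returns (made up)
true_w = np.zeros(n_factors)
true_w[:5] = [0.4, -0.3, 0.2, 0.1, -0.1]  # only a few factors truly matter
returns = F @ true_w + rng.normal(0, 0.1, n_obs)

# Ridge solution w = F'(FF' + lam*I)^{-1} r, via the dual (kernel) form,
# which only requires inverting an n_obs x n_obs matrix
lam = 10.0
K = F @ F.T + lam * np.eye(n_obs)
w_hat = F.T @ np.linalg.solve(K, returns)

pred = F @ returns @ np.zeros(1) if False else F @ w_hat  # fitted returns
r2 = 1 - np.var(returns - pred) / np.var(returns)
print(f"in-sample R^2: {r2:.2f}")
```

Note that the in-sample fit is nearly perfect precisely because such a model can interpolate anything, which is why the out-of-sample pricing-error comparisons, like the 54.8 percent figure above, are the tests that actually matter.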

That is from my new The Marginal Revolution: Rise and Decline, and the Pending Revolution in AI.

A reminder (for academics)

Yes, there are skills AIs haven’t mastered. But if your skill still appears to be the exclusive province of humans, that might mean the major AI companies do not yet consider it very important to master right away. Eventually it will rise to the top of the list.

Here is more from my Free Press essay on AI.  If not for the copied passage, it seems no one was noticing this book review? (NYT, read the emendation)

Sentences to ponder

This matters for the AI question, and the book leaves it unfinished. If the breakthroughs of the past required social conditions, not just cognitive capacity, then what does it mean when the next breakthroughs are produced by systems that have no social conditions at all? A neural net does not need a university chair or financial independence from the church. It does not need to reorganize its commitments. It does not, in any recognizable sense, have commitments. The machine that replaces the marginalist is not a better marginalist. It is a different kind of thing entirely.

That is from Jônadas Techio, presumably written with LLMs; this review of The Marginal Revolution is interesting throughout.  And this:

Maybe the book demonstrates only that Cowen personally remains good at something the field no longer needs.

*The AI Doc*

The subtitle of the movie is Or How I Became an Apocaloptimist, and here is the trailer.

Overall this film was better and smarter than I was expecting.  Intelligent people were allowed to speak, and to present various sides of the issue.  It was also interesting to see how various people one knows come across on the big screen.

It is easy enough to mock the final section of the movie, which calls for a participatory “civil rights” movement on AI, negotiations with China, and a big voice for trade unions in the decisions.  What Dan Klein calls “the people’s romance.”  The Straussian read there is correct, even though it probably was not intended by the moviemakers.  In reality, for better or worse, the final decisions will continue to be made by the national security establishment.

On a weekend, there were five other people in the theater.

Is Tinder actually OK?

Online dating apps have transformed the dating market, yet their broader effects remain unclear. We study Tinder’s impact on college students using its initial marketing focus on Greek organizations for identification. We show that the full-scale launch of Tinder led to a sharp, persistent increase in sexual activity, but with little corresponding impact on the formation of long-term relationships or relationship quality. Dating outcome inequality, especially among men, rose, alongside rates of sexual assault and STDs. However, despite these changes, Tinder’s introduction did not worsen students’ mental health on average and may have even led to improvements for female students.

That is from a new paper published in AEJ: Applied Economics, by Berkeren Büyükeren, Alexey Makarin, and Heyu Xiong.

A bilateral AI pause?

Dean Ball has some thoughts and hesitations:

Here are some questions I wish “Pause” and “Stop” advocates would address:

1. Assuming we achieve the desired policy goal through a bilateral US/China agreement, what would be the specific metric or objective we would say needs to be satisfied in advance? Who decides whether we have satisfied them? What if one party believes we have satisfied them but the other does not?

2. If the goal is achieved through a bilateral US/China agreement, would we need capital controls to ensure that U.S. investors cannot fund semiconductor fabs, data centers, or AI research labs in countries other than the U.S. and China?

3. Would we need to revoke the passports of U.S.-based AI researchers and semiconductor engineers to prevent them leaving America to join AI-related ventures elsewhere? How else would the U.S. and China keep researchers within their borders?

4. How should we grapple with the fact that (2) and (3) are common features of autocratic regimes?

5. Do the above questions mean that this really should be a global agreement, signed by all countries on Earth, or at least those with the theoretical ability to host large-scale data centers (probably Vanuatu doesn’t need to be on board)?

*The Marginal Revolution: Rise and Decline, and the Pending AI Revolution*

I am offering a new piece of work — I do not quite call it a book — online and free.  It has four chapters, is about 40,000 words, is fully written by me (not a word from the AIs), and it is attached to an AI with a dual page display, in this case Claude.  Think of it as a non-fiction novella of sorts; you can access it here.  You can read it on the screen, turn it into a pdf (and upload into your own AI), send it to your Kindle, or discuss it with Claude.

Here is the Table of Contents:

1. What Is Marginalism?

2. William Stanley Jevons, Builder and Destroyer of Marginalism

3. Why Did It Take So Long for the Science of Economics to Develop?

4. Why Marginalism Will Dwindle, and What Will Replace It?

Here are the first few paragraphs of the work:

How is it that ideas, and human capabilities, become lost? And how is it that new insights come to pass? If eventually the insight seems obvious, why didn’t we see it before? Or maybe we did see it before, but didn’t really know we were on to something important? Why do new insights arrive suddenly, in a kind of flood? How do new worldviews replace older ones?

And what does all of that have to do with the future of science, the future of research, and the future of economics in particular? Especially when we try to understand how the ongoing artificial intelligence revolution is going to reshape human knowledge, and the all-important question of what economists should do.

Those are the motivating questions behind this work, but I will address them in what is initially an indirect fashion. I will start by considering a case study, namely the most important revolution in economics, the Marginal Revolution (to be defined shortly). The Marginal Revolution made modern economics possible. What was the Marginal Revolution? How did it start? Why did it take so very long to come to fruition? From those investigations we will get a sense of how economic ideas, and sometimes ideas more generally, develop. And that in turn will help us see where the science, art, and practice of economics is headed today.

Recommended!  I will be covering it more soon.

Solve for the China tech equilibrium

Authorities in Beijing have barred two executives from a Singapore-based AI firm from leaving China amid a review of the company’s $2 billion acquisition by U.S. social media giant Meta, according to a report by the Financial Times on Wednesday.

Xiao Hong and Ji Yichao — the CEO and chief scientist, respectively, of Manus — were summoned to Beijing this month and questioned over a possible violation of foreign direct investment reporting rules related to the acquisition before being told they could not leave the country, the report said.

Here is more from The Washington Post.  In my view, the American lead in AI is somewhat larger than a model comparison alone might suggest.

Ryan Hauser interviews me in print

Here is the link, here is one excerpt:

What was your path into AI, and what are you working on now?

I first became interested in AI when I saw the chess computer Tinker Belle wheeled into a New Jersey chess tournament in I think 1975. I followed the Kasparov matches closely, and the more general progress of AI in chess. I read chess master David Levy telling me that chess was far too intuitive for computers ever to do well. He was wrong, and then I realized that AI could be intuitive and creative too. That was a long time ago.

In 2013 I published a book on the future of AI called Average Is Over. I feel it has predicted our current time very accurately. I also taught Asimov’s I, Robot – a work far ahead of its time – for twenty years.

Right now I am simply working to keep afloat and to stay abreast of recent AI developments. I blog and write columns on the topic frequently, and have regular visits to the major labs. I encourage universities to experiment with AI education.

I mention William Byrd and Paul McCartney as well.

What should I ask David Baszucki?

Yes I will be doing a Conversation with him.  From Wikipedia:

David Brent Baszucki (/bəˈzuːki/ buh-ZOO-ki; born January 20, 1963) is a Canadian-born American entrepreneur, engineer, and software developer. He is best known as the co-founder and CEO of Roblox Corporation. He co-founded and was the CEO of Knowledge Revolution, which was acquired by MSC Software in December 1998.

On Roblox:

Roblox (/ˈroʊblɒks/ ROH-bloks) is an online game platform and game creation system developed by Roblox Corporation that allows users to program and play games created by themselves or other users. It was created by David Baszucki and Erik Cassel in 2004, and released to the public in 2006. As of February 2025, the platform has reported an average of 85.3 million daily active users. According to the company, their monthly player base includes half of all American children under the age of 16.

So what should I ask him?

Some more slow take-off, driven by start-ups

So far, however, the predictions that the mass automation of coding will leave outsourcing firms obsolete seem overblown. Their clients often hope AI will create huge productivity gains by, for example, using the technology to quickly and cheaply build a new internal HR tool. But such improvements in productivity are only possible in “greenfield” environments with “clean architecture”, argues Atul Soneja, chief operating officer at Tech Mahindra, an IT firm. Deploying AI in “brownfield” environments—with legacy code, a lack of documentation and multiple systems that must all continue to operate in real time—is far trickier. In the end, clients often realise that their AI dreams were too ambitious and end up hiring as many outsourced coders as before, say executives.

What is more, the AI boom may present an opportunity for the consultancy arms of India’s outsourcers. They argue that they can now fulfil more of a strategic role for their clients: getting the most out of AI requires understanding all of the context around the problem, something that consultants with experience across businesses can offer. Nandan Nilekani, one of the founders of Infosys, reckons that such services related to AI could be worth $300bn-400bn by 2030.

Here is more from The Economist.