Category: Web/Tech
Self-driving vehicles and the cross-country drive
Following my post on cross-country driving, a reader asked me about doing the trip in a self-driving vehicle, but I suppose I am skeptical.
First, self-driving vehicles make it too easy to read a book or stare at your phone. Driving yourself fixes your attention on what is unfolding before your eyes, and forces you to keep it there. You might be bored for an hour, but you will catch periodic gems by always looking at the road before you and to the side.
Second, at least for a while self-driving vehicles will not be allowed to exceed speed limits. Good luck with that. A lot of America is marked at 25 mph when you can go 36 mph or maybe even 37 mph in a responsible manner.
Third, many of the best moments in cross-country driving come from the unexpected swerve — “hey, that looks interesting!” And half of the time it is not. Will the self-driving vehicle know when you might wish to swerve and pull over?
Fourth, there is something to be said for integrating the rhythms of your body with those of the car. When you drive yourself, you feel the trip in a way the Waymo does not give you. I would stress this point is a negative for most car trips, though perhaps not for a cross-country drive. If you do not enjoy driving through the USA, maybe do not do the cross-country thing at all? Walking through Paris or Istanbul remains a lovely alternative.
Automation and better AI might eventually solve or address some of those problems. But the next available round of self-driving vehicles probably will not.
My dialogue with Jonathan Zittrain
At Harvard Law School, Jonathan is consistently excellent.
Another possible cyberequilibrium? (from my email)
I would not wish to bet on this, but it is an interesting idea:
I wonder if the cyber capabilities of Mythos and future models ultimately lower the returns to ‘hacking,’ perhaps below the point where such efforts are worth investing in.
Say you’re a nefarious actor and uncover a critical, zero-day exploit in an important system. How do you extract the most value from that exploit? There are more valuable and less valuable times to deploy it, and usually the best time won’t be “immediately.” You may only get to deploy it once or a small number of times. You have to consider:
1. How long do I expect the vulnerability to persist?
2. What material gain do I get by exploiting it at a given time?
3. How does exploiting it increase my personal risk (by focusing countermeasures in my direction)?
The answer to (1) is now “a much shorter time than before,” while (2) and (3) are mostly unchanged. In the new world, yes, exploits are much easier to find, but the expected value of a given exploit has also shrunk. The odds of an opportune moment falling within the ‘window of usefulness’ of that exploit are much lower. It’s plausible that the new equilibrium becomes “it’s not even worth spending money to find vulnerabilities in most systems, because the chances of being able to do something useful with one before it’s patched are close to zero.”
Much of the fear around cybersecurity vulnerabilities is something like: our adversaries accumulate a pile of highly damaging (to physical infrastructure, military assets, communication systems, …) exploits, which in the event of a conflict they then rapidly deploy to cause damage. Mythos would seem to favor defense here, because the usable lifetime of any exploit is much shorter. Any cyberattack that is timing-dependent now has lower utility.
Yes, there are more mundane cybersecurity concerns like ransomware or data theft, but these aren’t hugely significant in the scheme of things. And I would expect within a few years we’ll have fairly robust tools for automated vulnerability discovery and patching that any large business that cares about these things can deploy.
No doubt this assumes you can trust those in control of the leading-edge models. But even if you’re a bit behind, the situation may not be so bad. There isn’t an infinite supply of exploits, and again, most of them only need to be found ‘fast enough’ in order to mitigate the damage.
From Jacob Gloudemans.
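To make the email's window-of-usefulness arithmetic concrete, here is a minimal back-of-the-envelope sketch. The structure follows the three questions above, but every number (payoff, discovery cost, how often an opportune moment arrives) is invented purely for illustration; opportune moments are modeled as a Poisson process and AI-assisted defense shrinks the exploit's expected lifetime:

```python
import math

def exploit_value(gain, lifetime_days, opportune_rate_per_year, discovery_cost):
    """Stylized expected value of hunting for a zero-day exploit."""
    lifetime_years = lifetime_days / 365.0
    # Probability that at least one opportune deployment moment arrives
    # before the vulnerability is independently found and patched
    p_usable = 1.0 - math.exp(-opportune_rate_per_year * lifetime_years)
    return gain * p_usable - discovery_cost

# Hypothetical numbers: a $5m payoff if deployed at the right moment,
# $500k to find the exploit, one opportune moment per year on average.
old_world = exploit_value(gain=5e6, lifetime_days=365, opportune_rate_per_year=1.0, discovery_cost=5e5)
new_world = exploit_value(gain=5e6, lifetime_days=14, opportune_rate_per_year=1.0, discovery_cost=5e5)

print(f"EV with a one-year usable window: ${old_world:,.0f}")   # roughly +$2.7 million
print(f"EV with a two-week usable window: ${new_world:,.0f}")   # roughly -$0.3 million
```

Under these made-up numbers, shortening the patch window alone flips the sign of the investment, which is the equilibrium the email is pointing toward.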
The wisdom of Roon
renaissance rationalization is a process that commodified itself rapidly: despite the europeans discovering most technology during the early modern period it spread everywhere within a few centuries, and the rate of spread has been increasing dramatically
knowledge of the scientific frontier dissipates around the world faster as science has enabled better communication technologies. it’s getting even faster with INTELLIGENCE technologies which actually explain themselves and help you build them
as we approach more powerful intelligence, the ability to train powerful models is self commodifying rather than building a huge and runaway advantage for a handful of recursive self improvers. this is one reason why you should expect almost all of the benefits of superintelligence to be captured by the public
Here is the tweet. That said, it would be useful to relax constraints on the supply of both energy and land, so that the benefits could diffuse more widely yet.
Financial Regulation and AI: A Faustian Bargain?
Important work is just flowing these days, and much of it (of course) concerns AI:
We study whether AI methods applied to large-scale portfolio holdings data can improve financial regulation. We build a state-of-the-art, graph-based deep learning model tailored to security-level data on the holdings of financial intermediaries. The architecture incorporates economic priors and learns latent representations of both assets and investors from the network structure of portfolio positions. Applied to the universe of non-bank financial intermediaries, covering nearly $40 trillion in wealth, the model substantially outperforms existing approaches in out-of-sample forecasts of intermediary trading behavior, including in crisis episodes. The model has more than ten times the explanatory power for the cross-sectional variation in asset returns during stress events compared to traditional approaches, and it outperforms existing systemic risk metrics at the institution level. Its learned representations show that the holdings network encodes rich, economically interpretable information about firesale vulnerability. The architecture is fully inductive, producing informative estimates even when entire asset classes or investors are withheld from training. We embed our empirical approach into a macroprudential optimal policy framework to formalize why these objects matter for policy and welfare. We show that even in an equilibrium environment subject to the Lucas critique, the predictive information from the model improves welfare by sharpening the cross-sectional targeting of policy interventions, and we demonstrate a complementarity between prediction and structural knowledge.
That is a new paper by Christopher Clayton and Antonio Coppola, of Yale and Stanford respectively.
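Here is a minimal sketch of the general idea, not the authors' architecture: treat the holdings matrix as a bipartite investor-asset graph, learn latent embeddings for both sides by passing messages across it, and score investor-asset pairs to predict next-period trades. The shapes, layer sizes, and trade-prediction head below are all illustrative, and the economic priors the paper builds in are omitted:

```python
import torch
import torch.nn as nn

class HoldingsGNN(nn.Module):
    """Toy bipartite message-passing model over an investor x asset holdings matrix."""

    def __init__(self, n_investor_feats, n_asset_feats, d=32):
        super().__init__()
        self.inv_in = nn.Linear(n_investor_feats, d)
        self.ast_in = nn.Linear(n_asset_feats, d)
        self.ast_update = nn.Linear(2 * d, d)   # asset <- aggregated investor messages
        self.inv_update = nn.Linear(2 * d, d)   # investor <- aggregated asset messages
        self.trade_head = nn.Linear(2 * d, 1)   # score each investor-asset pair

    def forward(self, H, inv_x, ast_x):
        # H: (n_investors, n_assets) portfolio weights; normalized for aggregation
        row_norm = H / (H.sum(dim=1, keepdim=True) + 1e-8)
        col_norm = H / (H.sum(dim=0, keepdim=True) + 1e-8)

        inv_h = torch.relu(self.inv_in(inv_x))                 # (n_investors, d)
        ast_h = torch.relu(self.ast_in(ast_x))                 # (n_assets, d)

        # One round of message passing across the bipartite holdings graph
        ast_msg = col_norm.t() @ inv_h                         # assets aggregate their holders
        ast_h = torch.relu(self.ast_update(torch.cat([ast_h, ast_msg], dim=1)))
        inv_msg = row_norm @ ast_h                             # investors aggregate their holdings
        inv_h = torch.relu(self.inv_update(torch.cat([inv_h, inv_msg], dim=1)))

        # Predicted next-period trade for every investor-asset pair
        pairs = torch.cat([inv_h.unsqueeze(1).expand(-1, ast_h.shape[0], -1),
                           ast_h.unsqueeze(0).expand(inv_h.shape[0], -1, -1)], dim=-1)
        return self.trade_head(pairs).squeeze(-1)              # (n_investors, n_assets)

# Smoke test with random data
H = torch.rand(100, 50)                                        # 100 investors, 50 assets
model = HoldingsGNN(n_investor_feats=8, n_asset_feats=6)
pred_trades = model(H, torch.randn(100, 8), torch.randn(50, 6))
print(pred_trades.shape)                                       # torch.Size([100, 50])
```

One appeal of this general style of architecture for regulators is that it is inductive: embeddings come from features plus graph structure rather than a fixed lookup table, which is how the paper's model can produce estimates even for investors or asset classes withheld from training.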
Andy Hall advice on AI and economic research
Here is the document, excerpt:
In January, I released the results of an experiment showing how Claude Code could helpfully extend old papers “automagically.” It was pretty astonishing to me. Claude was able to come up with a plan, scrape the web, write code, run regressions, create tables and figures, and write a whole memo on what it had found—all in about 45 minutes.
Are AI tools perfect? No. Claude made some interesting mistakes in that extension, and since then, I’ve seen it make a whole bunch more. Are human researchers perfect, though? Hell no.
The evidence that AI tools should now be an essential part of your toolkit is overwhelming—look at the recent work that my Stanford colleague Yiqing Xu has put out, for example, which allows for the automated verification of empirical research. This is so clearly valuable. When it comes to empirical work, we’re never going back to the pre-AI world.
Here is a thread on the paper, heedworthy throughout. If you do not have some kind of decent plan here, other economists will leave you in the dust. Even if it is only a minority of “other economists” their total leverage and impact will be extreme.
Advice for economics graduate students (and faculty?) vis-a-vis AI
From Isaiah Andrews, via Emily Oster and the excellent Samir Varma. A good piece, though I think it needs to more explicitly consider the most likely case, namely that the models are better at all intellectual tasks, including “taste,” or whatever else might be knockin’ around in your noggin…I am still seeing massive copium. But the models still are not able to “operate in the actual world as a being.” Those are the complementarities you need to be looking for, namely how you as a physical entity can enhance the superpowers of your model, or should I express that the other way around? That might include gathering data in the field, persuading a politician, or raising money. I am sure you can think of examples on your own.
My podcast with Russ Roberts on AI and education
On his EconTalk podcast, self-recommending…
Economists on AI and economic growth and employment
We completed the most comprehensive study of how economists and AI experts think AI will affect the U.S. economy. They predict major AI progress—but no dramatic break from economic trends: GDP growth rates similar to today’s and a moderate decline in labor force participation. However, when asked to consider what would happen in a world with extremely rapid progress in AI capabilities by 2030, they predict significant economic impacts by 2050:
• Annualized GDP growth of 3.5% (compared to 2.4% in 2025)
• A labor force participation rate of 55% (roughly 10 million fewer jobs)
• 80% of wealth held by the top 10% (highest since 1939)
That is from this very good and very detailed Twitter thread, worth reading in its entirety. Note this:
Only 5.2% of the variance is between scenarios—attributable to disagreement about AI capabilities themselves…
Here is the full paper, over 200 pages long, I will be reading through it. The list of authors is impressive, with Ezra Karger in the lead, also including Kevin Bryan, Basil Halperin, and many more. For some while this will stand as the best set of estimates we have. Here are the related forecasts of Seb Krier.
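A quick compounding calculation (my arithmetic, not the paper's, and treating each rate as constant over the whole period) helps put the headline gap in perspective:

```python
years = 2050 - 2025
baseline = 1.024 ** years        # trend-growth scenario
rapid_ai = 1.035 ** years        # rapid-AI scenario

print(f"GDP multiple by 2050 at 2.4% growth: {baseline:.2f}x")               # about 1.81x
print(f"GDP multiple by 2050 at 3.5% growth: {rapid_ai:.2f}x")               # about 2.36x
print(f"The rapid-AI economy ends up {rapid_ai / baseline - 1:.0%} larger")  # about 31%
```

That is a substantial cumulative difference, though well short of a dramatic break from historical trends.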
Is financial economics still economics?
That all sounded wonderful, and that core model and its offshoots dominated financial research for decades. The problem, however, was that it wasn’t true, or at least it wasn’t nearly as true as we had thought and hoped. When financial economists refined the models with more complete specifications, it turned out Beta didn’t predict stock returns much at all. Eugene Fama and Kenneth French delivered one of the final blows to the earlier approaches with a 1992 paper showing that Beta had no explanatory power over expected returns at all. Since Fama himself was one of the original architects of CAPM-like reasoning, and French was also a renowned financial economist, these revisions to the model were credible. For all its original promise, marginalism, along with the concomitant notion of diminishing marginal utility, no longer seemed to help explain asset returns.
Under one plausible account of intellectual history, you can date the decline of marginalism to that 1992 paper. In the most rigorous, data-oriented, and highest-paying field of economics, namely finance, marginalist constructs had every chance to succeed. In fact, they ran the board for several decades. But over time they failed. In the most prestigious field of economics, marginalism has been in full retreat for over 30 years, and it shows no signs of making a comeback.
We already know that financial practice is dominated by the (non-economist) quants. But how about financial economics research, the parts that are still done by economists? What direction is that work moving in?
I was struck by a 2024 paper published in the Journal of Financial Economics, one of the two leading journals of financial economics (Journal of Finance is the other). The authors are Scott Murray, Yusen Xia, and Houping Xiao, and the title is “Charting by Machines.” The core result is pretty simple, and best expressed in the well-written abstract:
“We test the efficient market hypothesis by using machine learning to forecast stock returns from historical performance. These forecasts strongly predict the cross-section of future stock returns. The predictive power holds in most subperiods and is strong among the largest 500 stocks. The forecasting function has important nonlinearities and interactions, is remarkably stable through time, and captures effects distinct from momentum, reversal and extant technical signals. These findings question the efficient market hypothesis and indicate that technical analysis and charting have merit. We also demonstrate that machine learning models that perform well in optimization continue to perform well out-of-sample.” Murray, Xia, and Xiao (2024, p. 1).

Or consider the new paper by Borri, Chetverikov, Liu, and Tsyvinski (2024). They propose a new non-linear, single-factor asset pricing model. In the abstract: “Most known finance and macro factors become insignificant controlling for our single-factor.” Yet you won’t find traditional economic variables discussed in this paper; it is all about the math, in particular an application of the Kolmogorov-Arnold representation theorem.
In other words, the successful approach to predicting returns is giving up on traditional portfolio theory and using the “theory-less” technique of machine learning. Although this is published in the Journal of Financial Economics, in some significant sense it is not economic reasoning at all. It is calculation, combined with expertise in math and computer science. The modeling is not economic modeling in a manner that has ties to marginalism or standard intuitive microeconomic theory. And the work is predicting excess returns in a pretty robust and successful way…
There is a recent working paper, by Antoine Didisheim, Shikun (Barry) Ke, Bryan T. Kelly, and Semyon Malamud, that is perhaps more striking yet. They pick up from Arbitrage Pricing Theory (APT), a well-established idea from financial economics. APT typically looks for “factors” in the data which predict excess returns, and a traditional APT model might have found five or six such factors. Are “inflation” or perhaps “the term structure of interest rates” useful factors? Well, that can be debated, but if so, those results sound pretty intuitive. But those intuitions seem to be disappearing. The authors apply machine learning methods to look for many more factors, and as we know, machine learning is very good at finding non-obvious relationships in the data. The largest model they built has 360,000 (!) factors, and it reduces pricing errors by 54.8 percent relative to the classic six-factor model from Fama and French. Bravo to the authors, but what kinds of intuitions do you think could possibly be supported by 360,000 factors?
That is from my new book, The Marginal Revolution: Rise and Decline, and the Pending Revolution in AI.
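To illustrate the contrast the excerpt is drawing, here is a stylized simulation, not a replication of any of the cited papers. The data-generating process is invented: by construction beta carries no premium, and returns contain an exaggerated nonlinear reversal, so the point is only to show what each exercise looks like, first the classic cross-sectional check on beta, then a "theory-less" machine-learning forecast built from lagged returns alone:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
N, T, LAGS = 200, 360, 12            # stocks, months, lagged returns used as features
half = T // 2

# Invented data-generating process: the market has zero mean (so beta earns no premium)
# and returns contain an exaggerated nonlinear reversal for the ML step to find.
beta  = rng.uniform(0.5, 1.5, N)
alpha = rng.normal(0.005, 0.002, N)          # expected returns, unrelated to beta
mkt   = rng.normal(0.0, 0.04, T)
eps   = rng.normal(0.0, 0.04, (N, T))

ret = np.zeros((N, T))
for t in range(T):
    reversal = -0.08 * np.tanh(6 * ret[:, t - 1]) if t > 0 else 0.0
    ret[:, t] = alpha + beta * mkt[t] + reversal + eps[:, t]

# 1) Classic exercise: estimate each stock's beta, then ask whether beta
#    explains the cross-section of average returns in the later sample.
beta_hat = np.array([np.polyfit(mkt[:half], ret[i, :half], 1)[0] for i in range(N)])
avg_ret  = ret[:, half:].mean(axis=1)
r2_beta  = np.corrcoef(beta_hat, avg_ret)[0, 1] ** 2
print(f"Cross-sectional R^2 of beta:      {r2_beta:.3f}")   # typically close to zero here

# 2) "Theory-less" exercise: forecast next month's return from the last 12 months
#    of returns with a machine-learning model, no economic structure imposed.
X = np.vstack([ret[:, t - LAGS:t] for t in range(LAGS, T)])
y = np.concatenate([ret[:, t] for t in range(LAGS, T)])
t_idx = np.concatenate([np.full(N, t) for t in range(LAGS, T)])
train, test = t_idx < half, t_idx >= half

ml = GradientBoostingRegressor(max_depth=3, n_estimators=150, learning_rate=0.05)
ml.fit(X[train], y[train])
r2_ml = r2_score(y[test], ml.predict(X[test]))
print(f"Out-of-sample R^2 of ML forecast: {r2_ml:.3f}")     # clearly positive: it finds the reversal
```

In this toy world the linear, theory-driven statistic has nothing to say about the cross-section, while the atheoretical learner picks up the pattern. That is the flavor of the results described above, even though real-world return predictability is far weaker than in this simulation.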
I appear on the Coleman Hughes podcast
A reminder (for academics)
Yes, there are skills AIs haven’t mastered. But if your skill still appears to be the exclusive province of humans, that might mean the major AI companies do not yet consider it very important to master right away. Eventually it will rise to the top of the list.
Here is more from my Free Press essay on AI. If not for the copied passage, it seems no one was noticing this book review? (NYT, read the emendation)
Sentences to ponder
This matters for the AI question, and the book leaves it unfinished. If the breakthroughs of the past required social conditions, not just cognitive capacity, then what does it mean when the next breakthroughs are produced by systems that have no social conditions at all? A neural net does not need a university chair or financial independence from the church. It does not need to reorganize its commitments. It does not, in any recognizable sense, have commitments. The machine that replaces the marginalist is not a better marginalist. It is a different kind of thing entirely.
That is from Jônadas Techio, presumably with LLM assistance; this review of The Marginal Revolution is interesting throughout. And this:
Maybe the book demonstrates only that Cowen personally remains good at something the field no longer needs.
*The AI Doc*
The subtitle of the movie is Or How I Became an Apocaloptimist, and here is the trailer.
Overall this film was better and smarter than I was expecting. Intelligent people were allowed to speak, and to present various sides of the issue. It was also interesting to see how various people one knows come across on the big screen.
It is easy enough to mock the final section of the movie, which calls for a participatory “civil rights” movement on AI, negotiations with China, and a big voice for trade unions in the decisions. What Dan Klein calls “the people’s romance.” The Straussian read there is correct, even though it probably was not intended by the moviemakers. In reality, for better or worse, the final decisions will continue to be made by the national security establishment.
On a weekend, there were five other people in the theater.
Is Tinder actually OK?
Online dating apps have transformed the dating market, yet their broader effects remain unclear. We study Tinder’s impact on college students using its initial marketing focus on Greek organizations for identification. We show that the full-scale launch of Tinder led to a sharp, persistent increase in sexual activity, but with little corresponding impact on the formation of long-term relationships or relationship quality. Dating outcome inequality, especially among men, rose, alongside rates of sexual assault and STDs. However, despite these changes, Tinder’s introduction did not worsen students’ mental health on average and may have even led to improvements for female students.
That is from a new paper published in AEJ: Applied Economics, by Berkeren Büyükeren, Alexey Makarin, and Heyu Xiong.
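For readers who want to see the shape of this kind of identification strategy, here is a generic two-way fixed-effects difference-in-differences sketch, not the authors' actual specification; the data frame, column names, and exposure measure are all hypothetical:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical campus-by-semester panel; columns are illustrative, not from the paper.
df = pd.read_csv("college_panel.csv")   # campus, semester, greek_share, post_launch, outcome

# The exposure x post interaction is the DiD term; campus and semester fixed effects
# absorb time-invariant campus differences and shocks common to all campuses.
df["treated_post"] = df["greek_share"] * df["post_launch"]
model = smf.ols("outcome ~ treated_post + C(campus) + C(semester)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["campus"]}
)
print(model.params["treated_post"], model.bse["treated_post"])
```

The coefficient on the interaction term is the difference-in-differences estimate: how outcomes changed after the full-scale launch for high-exposure campuses relative to low-exposure ones.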