Category: Web/Tech
The collapse of teen fertility in the digital era
Teen fertility collapsed globally starting around 2007, affecting countries across the income and policy spectrum. This paper argues that smartphones changed how teens spend time with each other, and that this change in turn drove the collapse in teen fertility. Once enough teens are on the phone, being on the phone is where the peer network is; in-person time falls sharply, and with it the unstructured contact in which most unintended teen conceptions occur. A coordination model formalizes this tipping: as the smartphone price falls, the in-person equilibrium ceases to exist and the economy moves to a phone-mediated one. Within the United States, terrain-ruggedness variation in broadband and 4G coverage identifies a causal effect on teen fertility, and time-use diaries show in-person socializing among teens roughly halving while digital leisure roughly triples. A parallel design for England and Wales recovers the same acceleration and the same effect of mobile coverage on teen conceptions, ruling out country-specific contraceptive-access and welfare-reform stories. The model predicts that the shift towards the phone-mediated equilibrium affects multiple aspects of teen behavior. The same instrument that produces a collapse in teen fertility produces a surge in teen suicides.
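The abstract only sketches the coordination model, but the tipping logic can be illustrated with a toy adoption game (my construction, not the paper's model: the thresholds, prices, and best-response dynamics below are all invented for illustration):

```python
# Stylized coordination game, not the paper's actual model: each teen
# adopts the phone once peer adoption clears a personal threshold that
# is scaled up by the phone's price.

def equilibrium_share(price, thresholds):
    """Iterate best responses from zero adoption until the share stabilizes."""
    share = 0.0
    for _ in range(len(thresholds) + 1):
        # A teen adopts if the current adoption share clears her threshold.
        new_share = sum(1 for t in thresholds if share >= t * price) / len(thresholds)
        if new_share == share:
            break
        share = new_share
    return share

# Heterogeneous thresholds, including one unconditional adopter (t = 0).
thresholds = [i / 100 for i in range(100)]

# At a high price, adoption stalls near zero and the in-person equilibrium
# survives; once the price falls enough, the cascade tips adoption to 1.
high = equilibrium_share(2.0, thresholds)  # stalls at 0.01
low = equilibrium_share(0.5, thresholds)   # tips all the way to 1.0
```

The discontinuity is the point: small price declines do nothing until the in-person equilibrium disappears, after which the phone-mediated equilibrium is the only one left.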
That is from a recent paper by Nathan Hudson and Hernan Moscoso Boedo.
My very charming Conversation with Craig Newmark
Here is the audio, video, and transcript. Here is part of the episode summary:
Tyler and Craig discuss why webpage design has gotten worse for 30 years, what Craig’s “obsessive customer service disorder” taught him about human nature, why trusting people and maintaining a nine-second rule for scams aren’t as contradictory as they sound, why roommate ads are a better way to find love, why Craigslist never added seller evaluations, why Leonard Cohen speaks to him more than Bob Dylan, what William Gibson’s Neuromancer got right about the internet, why Jackson Lamb is now one of his role models, why large foundations lose accountability, what two painful Ivy League grants taught him philanthropy, what he gets from rescuing pigeons, the hard lesson he learned about confronting people who lie for a living, his favorite TV shows and movies, the one genuine luxury he can’t go without, what he still needs to learn, and much more.
Excerpt:
COWEN: What is scarce in your life then? You’re giving away money. You don’t have to run the company on a day-to-day basis. We’d all like more years to live, but what is it that if you had more of it, you could be more effective with?
NEWMARK: I guess, ideally, I would have more social skills—meaning, some.
COWEN: We’re simulating social skills just fine here.
NEWMARK: That’s the phrase I use. At least on my part, what looks like social skills is just fakery. I can do it for short amounts of time, maybe 90 minutes. I’ve given up, though, on actually accumulating social skills, getting better at it. More to the point, I try to get into positions where other people can show social skills.
COWEN: One journalist once described you as having “obsessive customer service disorder.” Isn’t that a social skill?
NEWMARK: That’s more obsession, so it’s pathological, but a good one. I believe that you should treat people like you want to be treated. Think of the many times that you needed customer service. Sometimes you can get good customer service, but that’s the exception. That’s no reason for us not to provide good customer service. Like earlier today, someone sent in a grant proposal, and I had to tell them that they forgot to sign the thing, a very minor thing. More importantly, I’m telling people they need to do some planning for good communications because their work is much less valuable if they can’t talk about it effectively.
COWEN: According to Susan Freese, who wrote about you, in one year, you answered 40,000 customer service emails. Is that possibly true? If so, what did you learn about humanity doing that?
Recommended, charming and engaging throughout.
talkie: an LM from 1930
Here is the link, with explanation.
Will AI end anonymity?
Like many journalists, I have a bunch of unpublished fiction lying about, so I tried Claude on the first chapter of a romance novel that I started almost 20 years ago, during the hysterical, mawkish phase of a particularly bad breakup. “Megan McArdle,” said Opus 4.7, after a few seconds of thought. Fascinated, I kept feeding it smaller and smaller passages to see how little prose it needed for identification. The answer, apparently, was 1,441 words…
Would Claude do better or worse with something more modern? I fed Claude a different opening chapter from an unpublished science fiction novel I started right before the pandemic — I contain multitudes — and this time Claude needed only 1,132 words. The eulogy I gave for my mother, lightly edited to remove some too-specific biographical details, was even faster: Depending on the passage, Claude was able to peg me as the author in as few as 124 words.
Here is more from Megan McArdle.
Generative AI and Entrepreneurship
This paper studies how Generative AI (Gen AI) is reshaping the U.S. startup ecosystem. Exploiting the release of ChatGPT, we show that startups with greater pre-release Gen AI task exposure reduced employment within two quarters, primarily among junior and implementation roles. Displaced workers experienced longer unemployment spells and moved to lower-paying but less exposed jobs. Conversely, exposed startups increased productivity, scaled faster, and accelerated through financing rounds. Venture capital shifted toward frequent, smaller investments, boosting new firm formation. Overall, incumbent contraction was offset by new firm formation, leaving aggregate employment unchanged but shifting composition to senior roles.
That is from a new and important paper by Abhinav Gupta, Franklin Qian, Elena Simintzi, & Yifan Sun.
From the UAE
Under the directives of the President of the UAE, we launch a new government model.
Within two years, 50% of government sectors, services, and operations will run on Agentic AI, making the UAE the first government globally to operate at this scale through autonomous systems.
AI is no longer a tool. It analyses, decides, executes, and improves in real time. It will become our executive partner to enhance services, accelerate decisions, and raise efficiency.
This transformation has a clear timeline. Two years. Performance across government will be measured by speed of adoption, quality of implementation, and mastery of AI in redesigning government work.
We are investing in our people. Every federal employee will be trained to master AI, building one of the world’s strongest capabilities in AI-driven government.
Implementation will be overseen by Sheikh Mansour bin Zayed, with a dedicated taskforce chaired by Mohammad Al Gergawi driving execution.
The world is changing. Technology is accelerating. Our principle remains constant. People come first. Our goal is a government that is faster, more responsive, and more impactful.
Here is the link. While there is typically a certain amount of PR in such pronouncements, I do not think this one is only PR.
Imagegen 2.0
Created by Alex T., and of course GPT as well.
A Comparison of Agentic AI Systems and Human Economists
This paper compares agentic AI systems and human economists performing the same causal inference tasks. AI systems and humans generally obtain similar median causal effect estimates. While there is substantial dispersion of estimates across model instances, the human distributions of estimates have wider tails. Using AI models as reviewers to compare and rank “submissions,” the following ranking emerges regardless of reviewer model: (1) Codex GPT-5.4, (2) Codex GPT-5.3-Codex, (3) Claude Code Opus 4.6, and (4) Human Researchers. These findings suggest that agentic AI systems will allow us to scale empirical research in economics.
I enjoy the name of the author, namely Serafin Grundl. Here is the paper, via Ethan Mollick. You could interpret these results as showing the AIs have fewer hallucinations. And just to reiterate a key point from the paper:
The second part of this paper is an AI review tournament in which “submissions” (codes and write-ups) from humans and the AI models are compared and ranked against each other. The reviewers are the following AI models: Gemini 3.1 Pro Preview, Opus 4.6 and GPT-5.4. For each review the reviewer is asked to write a report comparing four submissions (human, Opus 4.6, GPT-5.3-Codex, GPT-5.4). Each reviewer model writes comparison reports for the same 300 comparison groups. The average rankings are strikingly similar across reviewer models: (1) Codex GPT-5.4, (2) Codex GPT-5.3-Codex, (3) Claude Code Opus 4.6, and (4) Human Researchers.
Who comes in last? Hi people!
Self-driving vehicles and the cross-country drive
Following my post on cross-country driving, a reader asked me about this prospect, but I suppose I am skeptical.
First, self-driving vehicles make it too easy to read a book or stare at your phone. Driving yourself fixes your attention on what is unfolding before your eyes, and forces you to keep it there. You might be bored for an hour, but you will catch periodic gems by always looking at the road before you and to the side.
Second, at least for a while self-driving vehicles will not be allowed to exceed speed limits. Good luck with that. A lot of America is marked at 25 mph when you can go 36 mph or maybe even 37 mph in a responsible manner.
Third, many of the best moments in cross-country driving come from the unexpected swerve — “hey, that looks interesting!” And half of the time it is not. Will the self-driving vehicle know when you might wish to swerve and pull over?
Fourth, there is something to be said for integrating the rhythms of your body with those of the car. When you drive yourself, you feel the trip in a way the Waymo does not give you. I would stress this point is a negative for most car trips, though perhaps not for a cross-country drive. If you do not enjoy driving through the USA, maybe do not do the cross-country thing at all? Walking through Paris or Istanbul remains a lovely alternative.
Automation and better AI might eventually solve or address some of those problems. But the next available round of self-driving vehicles probably will not.
My dialogue with Jonathan Zittrain
At Harvard Law School, Jonathan is consistently excellent.
Another possible cyberequilibrium? (from my email)
I would not wish to bet on this, but it is an interesting idea:
I wonder if the cyber capabilities of Mythos and future models ultimately lower the returns to ‘hacking,’ perhaps below the point where such efforts are worth investing in.
Say you’re a nefarious actor and uncover a critical, zero-day exploit in an important system. How do you extract the most value from that exploit? There are more valuable and less valuable times to deploy it, and usually the best time won’t be “immediately.” You may only get to deploy it once or a small number of times. You have to consider:
1. How long do I expect the vulnerability to persist?
2. What material gain do I get by exploiting it at a given time?
3. How does exploiting it increase my personal risk (by focusing countermeasures in my direction)?
The answer to (1) is now “a much shorter time than before,” while (2) and (3) are mostly unchanged. In the new world, yes, exploits are much easier to find, but the expected value of a given exploit has also shrunk. The odds of an opportune moment falling within the ‘window of usefulness’ of that exploit are much lower. It’s plausible that the new equilibrium becomes “it’s not even worth spending money to find vulnerabilities in most systems, because the chances of being able to do something useful with one before it’s patched are close to zero.”
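The argument amounts to a simple expected-value calculation. A sketch with invented numbers (the payoff, risk cost, arrival rate, and window lengths below are my illustrative assumptions, not anything from the email):

```python
import math

def exploit_value(gain, risk_cost, arrival_rate, window_days):
    """Expected value of holding an exploit: the payoff arrives only if an
    opportune deployment moment occurs before the vulnerability is patched.
    Opportune moments are modeled as a Poisson process with the given rate."""
    p_usable = 1 - math.exp(-arrival_rate * window_days)
    return p_usable * gain - risk_cost

rate = 1 / 365  # assume opportune moments arrive about once a year

# A year-long patch window makes the exploit worth acquiring...
slow_patching = exploit_value(gain=1e6, risk_cost=5e4, arrival_rate=rate, window_days=365)

# ...but AI-speed discovery and patching shrinks the window to days, and the
# expected value goes negative: not worth finding in the first place.
fast_patching = exploit_value(gain=1e6, risk_cost=5e4, arrival_rate=rate, window_days=7)
```

Under these assumptions the sign of the expected value flips as the patch window shrinks, which is the correspondent's claimed equilibrium shift.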
Much of the fear around cybersecurity vulnerabilities is something like: our adversaries accumulate a pile of highly damaging (to physical infrastructure, military assets, communication systems, …) exploits, which in the event of a conflict they then rapidly deploy to cause damage. Mythos would seem to favor defense here, because the usable lifetime of any exploit is much shorter. Any cyberattack that is timing-dependent now has lower utility.
Yes, there are more mundane cybersecurity concerns like ransomware or data theft, but these aren’t hugely significant in the scheme of things. And I would expect within a few years we’ll have fairly robust tools for automated vulnerability discovery and patching that any large business that cares about these things can deploy.
No doubt this assumes you can trust those in control of the leading-edge models. But even if you’re a bit behind, the situation may not be so bad. There isn’t an infinite supply of exploits, and again, most of them only need to be found ‘fast enough’ in order to mitigate the damage.
From Jacob Gloudemans.
The wisdom of Roon
renaissance rationalization is a process that commodified itself rapidly: despite the europeans discovering most technology during the early modern period it spread everywhere within a few centuries, and the rate of spread has been increasing dramatically
knowledge of the scientific frontier dissipates around the world faster as science has enabled better communication technologies. it’s getting even faster with INTELLIGENCE technologies which actually explain themselves and help you build them
as we approach more powerful intelligence, the ability to train powerful models is self commodifying rather than building a huge and runaway advantage for a handful of recursive self improvers. this is one reason why you should expect almost all of the benefits of superintelligence to be captured by the public
Here is the tweet. That said, it would be useful to relax constraints on the supply of both energy and land, so that the benefits could diffuse more widely yet.
Financial Regulation and AI: A Faustian Bargain?
Important work is just flowing these days, and much of it (of course) concerns AI:
We study whether AI methods applied to large-scale portfolio holdings data can improve financial regulation. We build a state-of-the-art, graph-based deep learning model tailored to security-level data on the holdings of financial intermediaries. The architecture incorporates economic priors and learns latent representations of both assets and investors from the network structure of portfolio positions. Applied to the universe of non-bank financial intermediaries, covering nearly $40 trillion in wealth, the model substantially outperforms existing approaches in out-of-sample forecasts of intermediary trading behavior, including in crisis episodes. The model has more than ten times the explanatory power for the cross-sectional variation in asset returns during stress events compared to traditional approaches, and it outperforms existing systemic risk metrics at the institution level. Its learned representations show that the holdings network encodes rich, economically interpretable information about firesale vulnerability. The architecture is fully inductive, producing informative estimates even when entire asset classes or investors are withheld from training. We embed our empirical approach into a macroprudential optimal policy framework to formalize why these objects matter for policy and welfare. We show that even in an equilibrium environment subject to the Lucas critique, the predictive information from the model improves welfare by sharpening the cross-sectional targeting of policy interventions, and we demonstrate a complementarity between prediction and structural knowledge.
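The abstract describes the architecture only at a high level. As a rough intuition pump, here is a minimal sketch of bipartite message passing over a holdings matrix (entirely my construction: the dimensions, weighting, and nonlinearity are invented, and the paper's model learns its representations end to end rather than starting from random embeddings):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy holdings network: H[i, a] = investor i's position size in asset a.
n_investors, n_assets, dim = 5, 8, 4
H = rng.random((n_investors, n_assets)) * (rng.random((n_investors, n_assets)) < 0.4)

# Random initial embeddings stand in for learned latent representations.
inv_emb = rng.normal(size=(n_investors, dim))
asset_emb = rng.normal(size=(n_assets, dim))

def normalize_rows(W):
    """Row-normalize so each node averages its neighbors' messages."""
    s = W.sum(axis=1, keepdims=True)
    return np.divide(W, s, out=np.zeros_like(W), where=s > 0)

# One round of holdings-weighted message passing in each direction:
# investors aggregate the assets they hold; assets aggregate their holders.
inv_emb = np.tanh(normalize_rows(H) @ asset_emb)
asset_emb = np.tanh(normalize_rows(H.T) @ inv_emb)
```

The point of this style of architecture is that an investor's representation is informed by what it holds and an asset's by who holds it, which is how the network structure of positions can encode things like fire-sale vulnerability.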
That is a new paper by Christopher Clayton and Antonio Coppola, of Yale and Stanford respectively.
Andy Hall advice on AI and economic research
Here is the document, excerpt:
In January, I released the results of an experiment showing how Claude Code could helpfully extend old papers “automagically.” It was pretty astonishing to me. Claude was able to come up with a plan, scrape the web, write code, run regressions, create tables and figures, and write a whole memo on what it had found—all in about 45 minutes.
Are AI tools perfect? No. Claude made some interesting mistakes in that extension, and since then, I’ve seen it make a whole bunch more. Are human researchers perfect, though? Hell no.
The evidence that AI tools should now be an essential part of your toolkit is overwhelming—look at the recent work that my Stanford colleague Yiqing Xu has put out, for example, which allows for the automated verification of empirical research. This is so clearly valuable. When it comes to empirical work, we’re never going back to the pre-AI world.
Here is a thread on the paper, heedworthy throughout. If you do not have some kind of decent plan here, other economists will leave you in the dust. Even if it is only a minority of “other economists” their total leverage and impact will be extreme.
Advice for economics graduate students (and faculty?) vis-a-vis AI
From Isaiah Andrews, via Emily Oster and the excellent Samir Varma. A good piece, though I think it needs to more explicitly consider the most likely case, namely that the models are better at all intellectual tasks, including “taste,” or whatever else might be knockin’ around in your noggin…I am still seeing massive copium. But the models still are not able to “operate in the actual world as a being.” Those are the complementarities you need to be looking for, namely how you as a physical entity can enhance the superpowers of your model, or should I express that the other way around? That might include gathering data in the field, persuading a politician, or raising money. I am sure you can think of examples on your own.