Category: Web/Tech
Discrimination on #EconTwitter
This paper documents discrimination in the formation of professional networks among academic economists. We created 80 bot accounts that claim to be PhD students differing in three characteristics: gender (male or female), race (Black or White), and university affiliation (top- or lower-ranked). The bots randomly followed 6,920 users in the #EconTwitter community. Follow-back rates were 12 percent higher for White students compared to Black students, 21 percent higher for students from top-ranked universities compared to those from lower-ranked institutions, and 25 percent higher for female compared to male students. Notably, the racial gap persists even among students from top-ranked institutions.
That is from a new AER: Insights paper by Nicolás Ajzenman, Bruno Ferman, and Pedro C. Sant’Anna. Here is a useful picture from the paper. Being at a top school, or at least pretending to be, is what really matters?
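As a stylized sketch of how the 2 x 2 x 2 randomized design maps into estimates (my own illustration with simulated data, not the authors’ code; the only numbers taken from the abstract are the reported gaps):

```python
# Hypothetical sketch: simulate fake follow-back data that bakes in the
# abstract's relative gaps, then recover the differences with a linear
# probability model. Not the authors' code or data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 6920  # number of users the bots followed
df = pd.DataFrame({
    "white": rng.integers(0, 2, n),     # 1 = White student profile
    "female": rng.integers(0, 2, n),    # 1 = female student profile
    "top_univ": rng.integers(0, 2, n),  # 1 = top-ranked affiliation
})
base = 0.20  # hypothetical baseline follow-back rate
p = base * (1 + 0.12 * df.white + 0.25 * df.female + 0.21 * df.top_univ)
df["followed_back"] = rng.binomial(1, p)

fit = smf.ols("followed_back ~ white + female + top_univ", data=df).fit()
print(fit.params.round(3))  # level differences in follow-back rates
```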
How Retrainable are AI-Exposed Workers?
We document the extent to which workers in AI-exposed occupations can successfully retrain for AI-intensive work. We assemble a new workforce development dataset spanning over 1.6 million job training participation spells from all US Workforce Innovation and Opportunity Act programs from 2012–2023 linked with occupational measures of AI exposure. Using earnings records observed before and after training, we compare high AI exposure trainees to a matched sample of similar workers who only received job search assistance. We find that AI-exposed workers have high earnings returns from training that are only 25% lower than the returns for low AI exposure workers. However, training participants who target AI-intensive occupations face a penalty for doing so, with 29% lower returns than AI-exposed workers pursuing more general training. We estimate that between 25% and 40% of occupations are “AI retrainable” as measured by their workers receiving higher pay for moving to more AI-intensive occupations—a large magnitude given the relatively low-income sample of displaced workers. Positive earnings returns in all groups are driven by the most recent years when labor markets were tightest, suggesting training programs may have stronger signal value when firms reach deeper into the skill market.
That is from a new NBER working paper by
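To see how the abstract’s two percentages compound, a toy restatement (the baseline return level is invented; only the 25% and 29% gaps come from the abstract):

```python
# Toy arithmetic; the 0.40 baseline earnings return is hypothetical.
low_exposure = 0.40                        # return for low AI exposure workers
high_exposure = low_exposure * (1 - 0.25)  # 25% lower for AI-exposed workers
ai_targeting = high_exposure * (1 - 0.29)  # further 29% penalty for targeting
print(f"low exposure:  {low_exposure:.3f}")   # 0.400
print(f"high exposure: {high_exposure:.3f}")  # 0.300
print(f"AI-targeting:  {ai_targeting:.3f}")   # 0.213
```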
Dean Ball on state-level AI laws
He is now out of government and has resumed writing his Substack. Here is one excerpt from his latest:
Several states have banned (see also “regulated,” “put guardrails on” for the polite phraseology) the use of AI for mental health services. Nevada, for example, passed a law (AB 406) that bans schools from “[using] artificial intelligence to perform the functions and duties of a school counselor, school psychologist, or school social worker,” though it indicates that such human employees are free to use AI in the performance of their work provided that they comply with school policies for the use of AI. Some school districts, no doubt, will end up making policies that effectively ban any AI use at all by those employees. If the law stopped here, I’d be fine with it; not supportive, not hopeful about the likely outcomes, but fine nonetheless.
But the Nevada law, and a similar law passed in Illinois, go further than that. They also impose regulations on AI developers, stating that it is illegal for them to explicitly or implicitly claim of their models that (quoting from the Nevada law):
(a) The artificial intelligence system is capable of providing professional mental or behavioral health care;
(b) A user of the artificial intelligence system may interact with any feature of the artificial intelligence system which simulates human conversation in order to obtain professional mental or behavioral health care; or
(c) The artificial intelligence system, or any component, feature, avatar or embodiment of the artificial intelligence system is a provider of mental or behavioral health care, a therapist, a clinical therapist, a counselor, a psychiatrist, a doctor or any other term commonly used to refer to a provider of professional mental health or behavioral health care.
First there is the fact that the law uses an extremely broad definition of AI that covers a huge swath of modern software. This means that it may become trickier to market older machine learning-based systems that have been used in the provision of mental healthcare, for instance in the detection of psychological stress, dementia, intoxication, epilepsy, intellectual disability, or substance abuse (all conditions explicitly included in Nevada’s statutory definition of mental health).
But there is something deeper here, too. Nevada AB 406, and its similar companion in Illinois, deal with AI in mental healthcare by simply pretending it does not exist. “Sure, AI may be a useful tool for organizing information,” these legislators seem to be saying, “but only a human could ever do mental healthcare.”
And then there are hundreds of thousands, if not millions, of Americans who use chatbots for something that resembles mental healthcare every day. Should those people be using language models in this way? If they cannot afford a therapist, is it better that they talk to a low-cost chatbot, or no one at all? Up to what point of mental distress? What should or could the developers of language models do to ensure that their products do the right thing in mental health-related contexts? What is the right thing to do?
The State of Nevada would prefer not to think about such issues. Instead, they want to deny that they are issues in the first place and instead insist that school employees and occupationally licensed human professionals are the only parties capable of providing mental healthcare services (I wonder what interest groups drove the passage of this law?).
AI-engaged economics papers are growing rapidly
…share of economics papers that is ABOUT or USES AI increased 10X to 5% in 5 years and growth is basically vertical.
Be there or be square!
Here is the tweet, here is the underlying paper by Eamon Duede, et al. Other sciences are considered as well; I do not need to tell you the results. They consider philosophy too.
Profile of Joe Liemandt and Alpha School
The one thing Liemandt will talk about for hours on end is Alpha School: the teacherless, homeworkless, K-12 private school in Austin, Texas, where students have been testing in the top 0.1% nationally by self-directing coursework with AI tutoring apps for two hours a day. Alpha students are incentivized to complete coursework to “mastery-level” (i.e., scoring over 90%) in only two hours via a mix of various material and immaterial rewards, including the right to spend the other four hours of the school day in “workshops,” learning things like how to run an Airbnb or food truck, manage a brokerage account or Broadway production, or build a business or drone.
Since the explosive debut of Generative AI in 2022, Liemandt has taken $1 billion out of Trilogy/ESW in order to fund and incubate proprietary AI software products at Alpha School, where he has also served quietly as “product guy,” dean of parents, and principal. After collecting a three-year data stream in these roles, while also working in a nearby stealth lab, Liemandt believes he now has “the single best product I’ve ever built, in four decades, by far.” The product is called Timeback, and its purpose, in essence, is to scale Alpha School’s concepts and results—learn 2x in 2 hours, test in the 99th percentile, and then give students the rest of their childhood back—to a billion kids.
Here is the full story by Jeremy Stern.
The AI polity that is Albania?
While the rest of Europe bickers over the safety and scope of artificial intelligence, Albania is tapping it to accelerate its EU accession.
It’s even mulling an AI-run ministry.
Prime Minister Edi Rama mentioned AI last month as a tool to stamp out corruption and increase transparency, saying the technology could soon become the most efficient member of the Albanian government.
“One day, we might even have a ministry run entirely by AI,” Rama said at a July press conference while discussing digitalization. “That way, there would be no nepotism or conflicts of interest,” he argued.
Local developers could even work toward creating an AI model to elect as minister, which could lead the country to “be the first to have an entire government with AI ministers and a prime minister,” Rama added.
While no formal steps have been taken and Rama’s job is not yet officially up for grabs, the prime minister said the idea should be seriously considered…
AI is already being used in the administration to manage the thorny matter of public procurement, an area the EU has asked the government to shore up, as well as to analyze tax and customs transactions in real time, identifying irregularities.
Here is the whole Politico story, via Holger.
“A bunch of economists”
Here is the link.
Data center facts of the day
JLL estimates $170bn of assets will require construction lending or permanent financing this year. Between now and 2029, however, global spending on data centres will hit almost $3tn, according to Morgan Stanley analysts. Of that, just $1.4tn is forecast to come from capital expenditure by Big Tech groups, leaving a mammoth $1.5tn of financing required from investors and developers.
About $60bn of loans are going into roughly $440bn of data centre development projects this year, twice as much debt as in 2024, according to a recent presentation by law firm Norton Rose Fulbright. More than $25bn of loans were underwritten in the first quarter of this year alone, according to a report by Newmark.
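Back-of-envelope, the rounded figures reconcile (my arithmetic, not the FT’s):

```python
# My arithmetic on the article's rounded figures.
total_spend = 2.9e12     # "almost $3tn" of global data centre spend to 2029
big_tech_capex = 1.4e12  # forecast Big Tech capital expenditure
gap = total_spend - big_tech_capex
print(f"external financing gap: ${gap/1e12:.1f}tn")  # ~$1.5tn

loans = 60e9      # loans into development projects this year
projects = 440e9  # rough value of this year's development projects
print(f"debt share of development: {loans/projects:.0%}")  # ~14%, double 2024
```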
Here is more from a very good FT article.
My excellent Conversation with Nate Silver
Here is the audio, video, and transcript. Here is part of the episode summary:
Tyler and Nate dive into expected utility theory and random Nash equilibria in poker, whether Silver’s tell-reading abilities transfer to real-world situations like NBA games, why academic writing has disappointed him, his move from atheism to agnosticism, the meta-rationality of risk-taking, electoral systems and their flaws, 2028 presidential candidates, why he thinks superforecasters will continue to outperform AI for the next decade, why more athletes haven’t come out as gay, redesigning the NBA, what mentors he needs now, the cultural and psychological peculiarities of Bay Area intellectual communities, why Canada can’t win a Stanley Cup, the politics of immigration in Europe and America, what he’ll work on next, and more.
Excerpt:
COWEN: If you think about the Manifold types in terms of the framework in your book, how they think about risk — is there a common feature that they’re more risk-averse, or that they worry more? Is there a common feature that they like the idea that they hold some kind of secret knowledge that other people do not have? How do you classify them? They’re just high in openness, or what is it?
SILVER: They’re high in openness to experience. I think they’re very high in conscientiousness.
COWEN: Are they? I don’t know.
SILVER: Some of them are. Some of them are, yes.
COWEN: I think of them as high variance in conscientiousness, rather than high in it.
[laughter]
SILVER: The EAs and the rationalists are more high variance, I think. A certain type of gullibility is one problem. I think, obviously, EA took a lot of hits for Sam Bankman-Fried, but if anything, they probably should have taken more reputational damage. That was really bad, and there were a lot of signs of it, including his interviews with you and other people like that. It contrasts with poker players who have similar phenotypes but are much more suspicious and much more street smart.
Also, the Bay Area is weird. I feel like the West Coast is diverging more from the rest of the country.
COWEN: I agree.
SILVER: It’s like a long way away. Just the mannerisms are different. You go to a small thing. You go to a house party in the Bay Area. There may not be very much wine, for example. In New York, if the host isn’t drinking, then it’d be considered sacrilege not to have plenty of booze at a party. Little things like that, little cultural norms. You go to Seattle — it feels like Canada to me almost, and so these things are diverging more.
COWEN: Why is belief in doom correlated with practice of polyamory? And I think it is.
SILVER: If you ask Aella, I guess, she might say, if we’re all going to die or go to whatever singularity there is, we might as well have fun in the meantime. There’s some of that kind of hedonism. Although in general, it’s not a super hedonistic movement.
COWEN: It seems too economistic to me. Even I, the economist — I don’t feel people think that economistically. There’s more likely some psychological predisposition toward both views.
SILVER: I guess you could argue that society would be better organized in a more polyamorous way. People do it implicitly in a lot of ways anyway, including in the LGBTQ [laughs] community, which has different attitudes toward it potentially. And if there’s not as much childbearing, that can have an effect, potentially. I think it’s the not-being-constrained-by-your-own-society thing that is taken very seriously in that group. There’s enough disconnectedness and aloofness where they’re able to play it out in practice more.
That creeps a little bit into Silicon Valley too, which can be much more whimsical and fanciful than the Wall Street types I know, for example.
Recommended. Here is my 2024 episode with Nate, here is my 2016 episode with him.
The Rising Returns to R&D: Ideas Are Not Getting Harder to Find (one hypothesis)
R&D investment has grown robustly, yet aggregate productivity growth has stagnated. Is this because “ideas are getting harder to find”? This paper uses micro-data from the US Census Bureau to explore the relationship between R&D and productivity in the manufacturing sector from 1976 to 2018. We find that both the elasticity of output (TFP) with respect to R&D and the marginal returns to R&D have risen sharply. Exploring factors affecting returns, we conclude that R&D obsolescence rates must have risen. Using a novel estimation approach, we find consistent evidence of sharply rising technological rivalry and obsolescence. These findings suggest that R&D has become more effective at finding productivity-enhancing ideas, but these ideas may also render rivals’ technologies obsolete, making innovations more transient. Because of obsolescence, rising R&D does not necessarily mean rising aggregate productivity growth.
Here is the paper by Yoshiki Ando (Singapore Management University, TPRI), James Bessen (BU, TPRI), and Xiupeng Wang. Via Arjun.
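A stylized way to see the abstract’s logic (my own illustrative decomposition, not the paper’s model): if the private return to R&D and the rate at which new innovations render rivals’ technologies obsolete rise together, measured returns to R&D climb while aggregate TFP growth need not.

```latex
% Illustration only, not the authors' model.
% g: aggregate TFP growth; R: aggregate R&D;
% theta: private productivity return per unit of R&D;
% delta: rate at which innovations render rivals' technologies obsolete.
\[
  g = \theta R - \delta R = (\theta - \delta)\,R
\]
% If theta and delta rise in tandem, firm-level returns to R&D look
% sharply higher, yet g can stagnate even as R grows.
```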
David Sacks is correct
A BEST CASE SCENARIO FOR AI?
The Doomer narratives were wrong. Predicated on a “rapid take-off” to AGI, they predicted that the leading AI model would use its intelligence to self-improve, leaving others in the dust, and quickly achieving a godlike superintelligence.
Instead, we are seeing the opposite:
— the leading models are clustering around similar performance benchmarks;
— model companies continue to leapfrog each other with their latest versions (which shouldn’t be possible if one achieves rapid take-off);
— models are developing areas of competitive advantage, becoming increasingly specialized in personality, modes, coding and math as opposed to one model becoming all-knowing.
None of this is to gainsay the progress. We are seeing strong improvement in quality, usability, and price/performance across the top model companies. This is the stuff of great engineering and should be celebrated. It’s just not the stuff of apocalyptic pronouncements. Oppenheimer has left the building.
The AI race is highly dynamic so this could change. But right now the current situation is Goldilocks:
— We have 5 major American companies vigorously competing on frontier models. This brings out the best in everyone and helps America win the AI race. As @BalajiS has written: “We have many models from many factions that have all converged on similar capabilities, rather than a huge lead between the best model and the rest. So we should expect a balance of power between various human/AI fusions rather than a single dominant AGI that will turn us all into paperclips/pillars of salt.”
— So far, we have avoided a monopolistic outcome that vests all power and control in a single entity. In my view, the most likely dystopian outcome with AI is a marriage of corporate and state power similar to what we saw exposed in the Twitter Files, where “Trust & Safety” gets weaponized into government censorship and control. At least when you have multiple strong private sector players, that gets harder. By contrast, winner-take-all dynamics are more likely to produce Orwellian outcomes.
— There is likely to be a major role for open source. These models excel at providing 80-90% of the capability at 10-20% of the cost. This tradeoff will be highly attractive to customers who value customization, control, and cost over frontier capabilities. China has gone all-in on open source, so it would be good to see more American companies competing in this area, as OpenAI just did. (Meta also deserves credit.)
— There is likely to be a division of labor between generalized foundation models and specific verticalized applications. Instead of a single superintelligence capturing all the value, we are likely to see numerous agentic applications solving “last mile” problems. This is great news for the startup ecosystem.
— There is also an increasingly clear division of labor between humans and AI. Despite all the wondrous progress, AI models are still at zero in terms of setting their own objective function. Models need context, they must be heavily prompted, the output must be verified, and this process must be repeated iteratively to achieve meaningful business value. This is why Balaji has said that AI is not end-to-end but middle-to-middle. This means that apocalyptic predictions of job loss are as overhyped as AGI itself. Instead, the truism that “you’re not going to lose your job to AI but to someone who uses AI better than you” is holding up well.
In summary, the latest releases of AI models show that model capabilities are more decentralized than many predicted. While there is no guarantee that this continues — there is always the potential for the market to accrete to a small number of players once the investment super-cycle ends — the current state of vigorous competition is healthy. It propels innovation forward, helps America win the AI race, and avoids centralized control. This is good news — that the Doomers did not expect.
Here is the tweet link. As you have read here, I am quite pleased with GPT-5. But it does not indicate that the more extreme (whether destructive or utopian) scenarios for AI development are correct, quite the contrary. Below the Sacks tweet, you can read some rather unconvincing responses.
Is anyone worth a billion dollars?
That is the topic of my latest Free Press column. Excerpt:
…in recent years they [Meta] have moved into AI in a big way. Over that same time period, the valuation of the company has risen from about $236 billion in November 2022 to almost $2 trillion at the end of this July.
The reasons for share price movements are not always transparent, but there is a broad consensus that AI investments are a significant reason why Meta is now worth much more. The original metaverse plans did not take off, and Facebook and Instagram are relatively mature products that have not changed much as of late.
So the market, responding to Meta’s promises about AI, expects that it will deliver on that $2 trillion value. Yet their current Llama models are not state of the art. Meta needs something better and more competitive.
Meta thus has to justify an extra $1.8 trillion in its valuation, which of course they could lose if markets decide they are not up to the task. Spending some billions on top-quality AI personnel is easy to justify when viewed in terms of the value gain Meta already has been reaping.
And it is not just about justifying the current $2 trillion valuation. Meta possibly could be worth more yet. It probably has not escaped their attention that as of late, both Nvidia and Microsoft have had valuations of about $4 trillion. So the possibility of further upside enters the equation as well.
Keep in mind that better AI also will boost the profits Meta can receive from ads on Facebook and Instagram. Click-through rates on ads typically are small, so even a modest increase in targeting ability can mean a lot more profit. Meta does not have to achieve superintelligence to get its money back on these investments; they just need better AI. There is also a plan to put more ads on WhatsApp (currently the user experience is mostly ad-free), and that too can benefit from better AI and better ad targeting.
The general principle is that top talent is typically undervalued, if only because of egalitarian norms in pay structures.
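To make the click-through point in the excerpt concrete, a toy calculation (all numbers are hypothetical; not from the column):

```python
# Toy illustration: because baseline CTRs are small, a small absolute
# improvement in ad targeting is a large relative gain in revenue.
# All numbers are hypothetical.
impressions = 1_000_000_000
revenue_per_click = 0.50  # hypothetical average revenue per click
baseline_ctr = 0.010      # 1.0% click-through rate
for ctr in (0.010, 0.011, 0.012):
    revenue = impressions * ctr * revenue_per_click
    lift = ctr / baseline_ctr - 1
    print(f"CTR {ctr:.1%}: ${revenue/1e6:.0f}M (+{lift:.0%} vs baseline)")
```

A 0.1 or 0.2 percentage point improvement in targeting shows up as a 10 or 20 percent revenue gain on the same inventory.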
GPT-5, a short and enthusiastic review
I am a big fan, as on my topics of interest it does much better than o3, and that is saying something. It is also lightning fast, even for complex queries of economics, history, and ideas.
One of the most impressive features is its uncanny sense of what you might want to ask next. And it has a good sense for when to give you a (sometimes interactive!) chart or diagram. It is a much better writer than o3.
I have had early access, and love to just keep on asking it, asking it, asking it questions. Today I was asking about Irish coinage disputes from 1724 (Swift) and now about different kinds of Buddhism and their historical roots. It was very accurate on cuisine in northern Ghana.
It is the best learning tool I have. Furthermore, it feels fun.
Here is a review from Ethan Mollick.
What is superintelligence?
The Free Press ran a symposium on that topic; I was a contributor. Here is the closing part of mine:
“I asked o3 pro, a leading AI model from OpenAI, “What is superintelligence?” Here is the opening to a much longer answer:
Superintelligence is a term most commonly used in artificial intelligence (AI) studies and the philosophy of mind to denote any intellect that greatly outperforms the best human brains in virtually every relevant domain—from scientific creativity and social skills to general wisdom and strategic reasoning.
Is that a superintelligent answer? No. Is it better than mine? The AI thinks it is a toss-up: “Cowen’s version excels at pithiness and viewpoint; mine excels at breadth and rigor. Neither is universally ‘better’; each serves different reader needs.”
I can breathe easily, at least for today.”
The Prime Minister of Sweden asks AI for advice in his job “quite often”
Here is the Reddit discussion.