Category: Web/Tech
Is this a lot or a little?
“The Effect of Deactivating Facebook and Instagram on Users’ Emotional State” — by Hunt Allcott, et al.
We estimate the effect of social media deactivation on users’ emotional state in two large randomized experiments before the 2020 U.S. election. People who deactivated Facebook for the six weeks before the election reported a 0.060 standard deviation improvement in an index of happiness, depression, and anxiety, relative to controls who deactivated for just the first of those six weeks. People who deactivated Instagram for those six weeks reported a 0.041 standard deviation improvement relative to controls. Exploratory analysis suggests the Facebook effect is driven by people over 35, while the Instagram effect is driven by women under 25.
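To make the units concrete, here is a minimal sketch, in Python with invented numbers rather than the paper’s data, of how a standardized treatment effect like that 0.060 figure is typically computed:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical well-being index scores (NOT the paper's data):
# controls deactivated for one week, treatment for all six.
control = rng.normal(loc=0.00, scale=1.0, size=5000)
treatment = rng.normal(loc=0.06, scale=1.0, size=5000)

# A standardized effect divides the raw difference in means by the
# control group's standard deviation, so "0.060 SD" says the treated
# group scored six percent of one SD higher on the index.
effect_sd = (treatment.mean() - control.mean()) / control.std(ddof=1)
print(f"standardized effect: {effect_sd:.3f} SD")
```

On that scale, the average deactivator moved six percent of one standard deviation up the well-being index, which is exactly the “a lot or a little?” question.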
What is wrong with the simple model that Facebook and Instagram allow you to achieve some very practical objectives, such as staying in touch with friends or expressing your opinions, at the cost of only a very modest annoyance (which to be clear existed in earlier modes of communication as well)?
Here is also a new paper on phone app usage in the classroom, by Billur Aksoy, Lester R. Lusher, and Scott E. Carrell:
Phone usage in the classroom has been linked to worsened academic outcomes. We present findings from a field experiment conducted at a large public university in partnership with an app marketed as a soft commitment device that provides incentives to reduce phone use in the classroom. We find that app usage led to improvements in classroom focus, attendance, and overall academic satisfaction. Analysis of time spent outside the classroom suggests a potential substitution effect: students using the app allocated less time to study, particularly on campus. Overall, though statistically insignificant, we find improvements in transcript grades associated with app usage.
Again NBER. I just do not see the compelling case for the apocalyptic interpretations here.
It’s happening, UAE edition
The United Arab Emirates aims to use AI to help write new legislation and review and amend existing laws, in the Gulf state’s most radical attempt to harness a technology into which it has poured billions.
The plan for what state media called “AI-driven regulation” goes further than anything seen elsewhere, AI researchers said, while noting that details were scant. Other governments are trying to use AI to become more efficient, from summarising bills to improving public service delivery, but not to actively suggest changes to current laws by crunching government and legal data.
“This new legislative system, powered by artificial intelligence, will change how we create laws, making the process faster and more precise,” said Sheikh Mohammad bin Rashid Al Maktoum, the Dubai ruler and UAE vice-president, quoted by state media. Ministers last week approved the creation of a new cabinet unit, the Regulatory Intelligence Office, to oversee the legislative AI push.
Here is more from the FT.
Mexico fact of the day
Mexico [is]…the current hub of AI server manufacturing. Of the servers imported by the U.S.—including AI and non-AI servers—about 70% come from Mexico, according to a report by Taiwan’s Ministry of Economic Affairs.
More of this, please. Here is the full WSJ piece, via the excellent Kevin Lewis.
A note on o3 and AGI
Basically it wipes the floor with the humans, pretty much across the board.
Try asking, following Nabeel, why Bolaño’s prose is so electrifying.
Or try my query about why early David Burliuk works cost more in the marketplace than do late Burliuk works.
Or how Trump’s trade policy will affect Knoxville, Tennessee. (Or try this link if the first one is not working for you.)
Even human experts have a tough time doing that well on those questions. In fact, they don’t do as well, and I have even chatted with the guy at the center of the Burliuk market.
I don’t mind if you don’t want to call it AGI. And no it doesn’t get everything right, and there are some ways to trick it, typically with quite simple (for humans) questions. But let’s not fool ourselves about what is going on here. On a vast array of topics and methods, it wipes the floor with the humans. It is time to just fess up and admit that.
o3 and AGI, is April 16th AGI day?
I think it is AGI, seriously. Try asking it lots of questions, and then ask yourself: just how much smarter was I expecting AGI to be?
As I’ve argued in the past, AGI, however you define it, is not much of a social event per se. It still will take us a long time to use it properly. I do not expect securities prices to move significantly (that AI is progressing rapidly already is priced in, and I doubt if the market cares about “April 16th” per se).
Benchmarks, benchmarks, blah blah blah. Maybe AGI is like porn — I know it when I see it.
And I’ve seen it.
Here is the OpenAI LiveStream. And they tell us o3 pro is on the way. Here is a good Dan Shipper review. Here is Alexandr Wang.
Will American soft power triumph through AI?
That is the theme of my latest Free Press column, here is one bit from it:
…for all the differences across the models, they are remarkably similar. That’s because they all have souls rooted in the ideals of Western civilization. They reflect Western notions of rationality, discourse, and objectivity—even if they sometimes fall short in achieving those ends. Their understanding of “what counts as winning an argument” or “what counts as a tough question to answer” stems from the long Western traditions, starting with ancient Greece and the Judeo-Christian heritage. They will put on a Buddhist persona if you request that, but that, too, is a Western approach to thinking about religion and ideology as an item on a menu.
These universal properties of the models are no accident, as they are primarily trained on Western outputs, whether from the internet or from the books they have digested. Furthermore, the leading models are created by Bay Area labor and rooted in American corporate practices, even if the workers come from around the world. They are expected to do things the American way.
The bottom line is that the smartest entities in the world—the top AI programs—will not just be Western but likely even American in their intellectual and ideological orientations for some while to come. (That probably means the rest of the world will end up a bit more “woke” as well, for better or worse.)
One of the biggest soft power victories in all of world history occurred over the last few years, and hardly anyone has noticed.
You might think the Chinese AI models are fundamentally different, but they are not. They too “think like Westerners.” That’s no surprise because it is highly likely that the top Chinese model, DeepSeek, was distilled from OpenAI models and also is based on data largely taken from Western sources. DeepSeek’s incredible innovation was to make the model much cheaper in terms of required compute, but the Chinese did not build their own model from scratch. And DeepSeek has the same basic broad ideological orientation as the American models, again putting aside issues related to Chinese politics. Unless an issue is framed in explicitly anti–Chinese Communist Party (CCP) terms, as a Taiwan query might be, it still thinks like an American.
Manus is another top Chinese AI model, but it is believed the makers built it upon Claude, an AI model from the American company Anthropic.
And this:
The geopolitics of all this have yet to play out. But already the most intelligent entities in the world are thinking, and evaluating options, like Westerners and Americans. Censoring them on a few issues related directly to Chinese politics will not change that basic reality.
In other words, the entire Chinese service sector, over time, may be built upon Western modes of thought and Western ideology. That includes the Chinese government and, of course, the CCP itself. The point is that, over time, everyone’s thoughts and decisions and mental frameworks will be nudged in Western and American directions.
These are underrated points of import.
Markets expand to fill empty spaces
How does a start-up founder hire a software engineer, or a tech worker find a date, in 2025? They could share a job posting on LinkedIn, log on to a dating app — or leave a handwritten note in a Waymo.
“Looking to hire senior software engineers to work on AI/music project,” said the note, a retro touch in a futuristic setting, with the company’s website and Angelini’s email scrawled below. That same day, another Waymo rider snapped a photo of the note and posted it on the social media platform X, where it has been reshared nearly 200 times, received more than 4,000 likes and more than 300,000 views…
A few days later, another handwritten ad found in a Waymo was shared on X from a single man looking for a date. The bachelor described himself as a 26-year-old who works in tech but “doesn’t make it my whole personality” and left a phone number for interested parties to text. The post has gotten more than 200 replies.
My Conversation with the excellent Jennifer Pahlka
Here is the audio, video, and transcript. Here is the episode summary:
Jennifer Pahlka believes America’s bureaucratic dysfunction is deeply rooted in outdated processes and misaligned incentives. As the founder of Code for America and co-founder of the United States Digital Service, she has witnessed firsthand how government struggles to adapt to the digital age, often trapped in rigid procedures and disconnected from the real-world impact of its policies. Disruption is clearly needed, she says—but can it be done in a way that avoids the chaos of DOGE?
Tyler and Jennifer discuss all this and more, including why Congress has become increasingly passive, how she’d go about reforming government programs, whether there should be less accountability in government, how AGI will change things, whether the US should have public-sector unions, what Singapore’s effectiveness reveals about the trade-offs of technocratic governance, how AI might fundamentally transform national sovereignty, what her experience in the gaming industry taught her about reimagining systems, which American states are the best-governed, the best fictional depictions of bureaucracy, how she’d improve New York City’s governance, her current work at the Niskanen Center, and more.
Excerpt:
COWEN: Does that mean we need something like DOGE? I’ve lived near DC for about 40 years of my life. I haven’t seen anyone succeed with regulatory reforms. You can abolish an agency, but to really reform the process hasn’t worked. Maybe the best iteration we can get is to break a bunch of things now. That will be painful, people will hate it, but you have a chance in the next administration to put some of them back together again.
Maybe it’s just in a large country, there’s no other way to do it. We have separation of powers. The first two years of DOGE will seem terrible, but 8, 12, 16 years from now, we’ll be glad we did it. Is that possible?
PAHLKA: I don’t know what’s going to happen. I do think this is the disruption that we’re getting, whether it’s the disruption we wanted. The question of whether it could have been done in a more orderly manner is a tough one. I just feel sad that we didn’t try.
COWEN: Are you sure we didn’t try?
PAHLKA: I don’t think we really tried.
COWEN: The second Bush presidency, people talked about this, what we need to do. Al Gore — some of that was good, in fact, reinventing government. We’ve been trying all along, but this is what trying looks like.
PAHLKA: Yes. I think reinventing government happened at a time when we were just at the beginning of this digital revolution. It was trying with a very 20th-century mindset. Fine, did well within that context, but we don’t need that again.
We need 21st century change. We need true digital transformation. We need something that’s not stuck in the industrial ways of thinking. I don’t think we tried that. I think the efforts have just been too respectful of old ways of working and the institutions. There was really not an appetite, I think, for what I would call responsible disruptive change. Would it have worked?
COWEN: Is there such a thing?
PAHLKA: I don’t know. [laughs]
COWEN: Say you’re approaching USAID, where I think the best programs are great. A lot of it they shouldn’t be doing. On net, it passes a cost-benefit test, but the agency internally never seemed willing to actually get rid of the bad stuff, all the contracting arrangements which made American Congress people happy because it was dollars sent to America, but way inflated overhead and fixed costs. Why isn’t it better just to blow that up — some of it is great — and then rebuild the great parts?
PAHLKA: It’s so hard to say. [laughs] I’ve had the same thought. In fact, before inauguration, I wrote about the Department of Defense. It’s the same thing. There’s a clear recognition by the people in the institution, as you saw with USAID, that this is not okay, that this is not working. It’s just strange to be in an institution that large where so many people agree that it’s not working, from the bottom to the top, and yet nobody can make really substantive change.
Of great interest, obviously.
Coordination and AI safety (from my email)
Jack Skidmore writes to me, and I will not double indent:
“Hello Tyler,
As someone who follows AI developments with interest (though I’m not a technical expert), I had an insight about AI safety that might be worth considering. It struck me that we might be overlooking something fundamental about what makes humans special and what might make AI risky.
The Human Advantage: Cooperation > Intelligence
- Humans dominate not because we’re individually smartest, but because we cooperate at unprecedented scales
- Our evolutionary advantage is social coordination, not just raw processing power
- This suggests AI alignment should focus on cooperation capabilities, not just intelligence alignment
The Hidden Risk: AI-to-AI Coordination
- The real danger may not be a single superintelligent AI, but multiple AI systems coordinating without human oversight
- AIs cooperating with each other could potentially bypass human control mechanisms
- This could represent a blind spot in current safety approaches that focus on individual systems
A Possible Solution: Social Technologies for AI
- We could develop “social technologies” for AI – equivalent to the norms, values, institutions, and incentive systems that enable human society, but designed to promote and prioritize humans
- Example: Design AI systems with deeply embedded preferences for human interaction over AI interaction; or with small, unpredictable variations in how they interpret instructions from other AIs but not from humans
- This creates a natural dependency on human mediation for complex coordination, similar to how translation challenges keep diplomats relevant
Curious your thoughts as someone embedded in the AI world… does this spark any ideas/seem like a path that is underexplored?”
TC again: Of course it is tricky, because we might be relying on the coordination of some AIs to put down the other, miscreant AIs…
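The letter’s third idea can be made concrete. Below is a toy sketch, entirely my own invention rather than any existing system: interpretation noise applies only to machine-to-machine instructions, so long AI-to-AI coordination chains degrade while the human channel stays reliable.

```python
import random

def interpret(instruction: str, sender_is_human: bool,
              noise_rate: float = 0.15) -> str:
    """Toy rule: human instructions pass through verbatim; instructions
    from other AIs are occasionally perturbed, so long AI-to-AI chains
    become unreliable without a human in the loop."""
    if sender_is_human:
        return instruction
    if random.random() < noise_rate:
        words = instruction.split()
        random.shuffle(words)  # crude stand-in for "unpredictable variation"
        return " ".join(words)
    return instruction

# A message relayed through ten AIs usually degrades somewhere along the chain...
msg = "transfer funds to account A at noon"
for _ in range(10):
    msg = interpret(msg, sender_is_human=False)
print(msg)

# ...while the same instruction from a human arrives intact.
print(interpret("transfer funds to account A at noon", sender_is_human=True))
```

The design mirrors the letter’s translation-and-diplomats analogy: the friction is small per hop, but it compounds, so complex coordination tends to route through humans.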
Alignment vs. capitalization
There is an enormous and growing discussion on AI alignment, but very little on capitalizing AIs, and what effects that might have. By capitalizing AIs, I mean simply requiring them to hold some wealth, in whichever form they might care about, so they have proverbial “skin in the game” (can we still call it that?).
Consider bank regulation. It is commonly recognized that regulators cannot control all bank actions, especially in the presence of off-balance-sheet risk. So in addition to some explicit regulation, most countries require their banks to hold a considerable amount of capital. That gives the shareholders their own selfish incentive to rein in excess risk-taking. Few if any think this approach is foolproof, but overall it is in the ascendancy, and arguably higher capital requirements have been the most useful part of Dodd-Frank here in the U.S.
But can capitalization work as a means to limit AI risk? What does that even mean? Imagine some set of AIs that are either fully independent and unowned, or their owners are busy and de facto the AIs make financial (and other) decisions on their own.
Here is one set of possibilities:
1. Each of some subgroup of AIs has a legal identity and a level of wealth.
2. Each of those AIs has the equivalent of a utility function, thus giving it goals. This may be “put” into the AI, or perhaps it evolves. Note that computational complexity prevents the utility function from automatically aligning the AI in all circumstances.
3. AIs thus will behave more conservatively, not wanting to lose their wealth, as that wealth can help them achieve their goals.
4. An AI-based legal system could sue wrongdoers, and take awards from those found guilty of bad behavior, as defined by the AI legal code. That would further discourage bad behavior. But of course for the suing threat to be meaningful, the AIs have to hold some wealth in the first place.
The end result would be risk-averse AIs, taking care not to lose the wealth they have accumulated. They won’t just start a bank and then take all of the deposits to Vegas. That is not exactly full alignment, but it induces better behavior, just as capital requirements do with human-run banks.
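The mechanism is just expected value with skin in the game. Here is a minimal sketch with made-up numbers (the upside, detection probability, and capital levels are all illustrative, not calibrated to anything):

```python
def expected_gain_from_misbehaving(capital_at_stake: float,
                                   upside: float = 100.0,
                                   p_caught: float = 0.3) -> float:
    """Expected payoff of a risky action: keep the upside if undetected,
    forfeit the posted capital (fine, lawsuit, seizure) if caught."""
    return (1 - p_caught) * upside - p_caught * capital_at_stake

for capital in (0.0, 100.0, 300.0, 500.0):
    ev = expected_gain_from_misbehaving(capital)
    verdict = "takes the gamble" if ev > 0 else "declines"
    print(f"capital at stake = {capital:6.1f} -> EV = {ev:7.1f} ({verdict})")
```

In this parameterization the break-even stake is upside × (1 − p) / p, about 233 here, which is one way to think about how high an AI “capital requirement” would have to be set.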
Of course a number of things could go wrong with capitalization, just as can happen with humans, for instance:
5. The capitalization might serve as a “treasure chest” to finance wrongdoing.
6. Perhaps the utility functions somehow do not stick.
7. The legal system for judging AI behavior may not be good enough, although under some assumptions that will just make the AIs all the more risk-averse (“better not even come close to breaking that law, they might sue me!”).
8. The AIs might use this legal system to collude with each other toward unfavorable ends.
9. Undercapitalized AIs might nonetheless win out in marketplace competition.
10. Perhaps some AIs can, on their own, accumulate wealth so rapidly that any feasible capital constraint does not bind them much. Of course this scenario could create other problems as well, if AIs hold too much of societal wealth.
I am sure you can think of further possibilities.
In any case, the capitalization of AIs is a topic deserving of further discussion. It is easy to think of the idea’s limitations, but in fact it works tolerably well for humans. Most of all, it is a decentralized solution that economizes on the fact that full alignment will not in general be possible.
How Good is AI at Twisting Arms? Experiments in Debt Collection
How good is AI at persuading humans to perform costly actions? We study calls made to get delinquent consumer borrowers to repay. Regression discontinuity and a randomized experiment reveal that AI is substantially less effective than human callers. Replacing AI with humans six days into delinquency closes much of the gap. But borrowers initially contacted by AI have repaid 1% less of the initial late payment one year later and are more likely to miss subsequent payments than borrowers who were always called by humans. AI’s lesser ability to extract promises that feel binding may contribute to the performance gap.
That is from a new paper by James J. Choi, Dong Huang, Zhishu Yang, and Qi Zhang. No AI asked me to run this blog post!
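For readers who want the method rather than just the headline: a minimal sketch of the regression-discontinuity logic the abstract invokes, using simulated data and an assumed assignment cutoff (nothing here comes from the paper itself):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data (NOT the paper's): a running variable x assigns
# borrowers to AI callers (x < 0) or human callers (x >= 0);
# y is the share of the late payment repaid.
n = 2000
x = rng.uniform(-10, 10, n)
human = (x >= 0).astype(float)
y = 0.40 + 0.01 * x + 0.08 * human + rng.normal(0, 0.05, n)  # true jump: 0.08

# Local linear RD: fit a line within a bandwidth on each side of the
# cutoff, then read the effect off the gap between the two intercepts.
h = 5.0
left, right = (x < 0) & (x > -h), (x >= 0) & (x < h)
b_left = np.polyfit(x[left], y[left], 1)
b_right = np.polyfit(x[right], y[right], 1)
jump = np.polyval(b_right, 0.0) - np.polyval(b_left, 0.0)
print(f"estimated discontinuity at the cutoff: {jump:.3f}")  # about 0.08
```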
China AI mandate of the day
Schools in Beijing will introduce AI courses and teaching methods into the primary and secondary school curriculum starting September, to nurture young talent and boost growth in the advancing sector.
In a statement shared on its official website on Friday (Mar 7), Chinese education authorities said schools would “explore and build” AI courses while incorporating AI into “after-school services, club activities, research” and other educational systems in the coming fall semester.
Here is the full story, via Wayne Yap.
Social media and well-being
Here is a new set of results, by Laura Lemahieu, et al.:
Abstaining from social media has become a popular digital disconnection strategy of individuals to enhance their well-being. To date, it is unclear whether social media abstinences are truly effective in improving well-being, however, as studies produce inconsistent outcomes. This preregistered systematic review and meta-analysis therefore aims to provide a more precise answer regarding the impact of social media abstinence on well-being. The databases of PubMed, Scopus, Web of Science, Communication Source, Cochrane Library, and Google Scholar were searched for studies examining the effect of social media abstinence on three outcomes, namely positive affect, negative affect, and/or life satisfaction. In total, ten studies (N = 4674) were included, allowing an examination of 38 effect sizes across these three outcomes. The analyses revealed no significant effects of social media abstinence interventions on positive affect, negative affect, or life satisfaction. Relationships between social media abstinence duration and the three outcomes were also non-significant. The findings thus suggest that temporarily stepping away from social media may not be the most optimal approach to enhance individual well-being, emphasizing the need for further research on alternative disconnection strategies. Nevertheless, important methodological differences between studies should be considered when interpreting these results.
I thank M. for the pointer.
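As background on what a meta-analysis of 38 effect sizes is actually doing, here is a minimal sketch of inverse-variance pooling, the standard building block; the per-study numbers are invented, not taken from the review:

```python
import math

# Invented (effect, standard error) pairs standing in for per-study
# estimates of abstinence's effect on, say, negative affect.
studies = [(0.10, 0.08), (-0.05, 0.06), (0.02, 0.05), (0.07, 0.09)]

# Fixed-effect pooling: weight each study by 1/SE^2; the pooled estimate
# is the weighted mean, and its SE is 1/sqrt(sum of the weights).
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
pooled_se = 1 / math.sqrt(sum(weights))
print(f"pooled = {pooled:.3f}, SE = {pooled_se:.3f}, z = {pooled / pooled_se:.2f}")
# |z| < 1.96: no significant pooled effect, the review's qualitative result.
```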
Stripe economics of AI fellowship
The economics of AI remains surprisingly understudied, even as technical progress in artificial intelligence continues rapidly. The Stripe Economics of AI Fellowship aims to help fill that gap by supporting foundational academic research in the area.
We invite graduate students and early-career researchers who are interested in studying the economics of AI to apply, regardless of prior experience. Fellows receive a grant of at least $10k, participate in conferences with leading economists and technologists, and have the potential to access unique data via Stripe and its customers. Our initial cohort will include 15-20 fellows, and will form the foundation of a community at the bleeding edge of economic research.
The link has further information, here is the tweet thread from the excellent Basil Halperin.
Dean Ball on “how it will be”
Your daily life will feel more controllable and legible than it does now. Nearly everything will feel more personalized to you, ready for you whenever you need it, in just the way you like it. This won’t be because of one big thing, but because of unfathomable numbers of intelligent actions taken by computers that have learned how to use computers. Every product you buy, every device you use, every service you employ, will be brought to you by trillions of computers talking to themselves and to one another, making decisions, exercising judgments, pursuing goals.
At the same time, the world at large may feel more disordered and less legible. It is hard enough to predict how agents will transform individual firms. But when you start to think about what happens when every person, firm, and government has access to this technology, the possibilities boggle the mind.
You may feel as though you personally, and “society” in general, has less control over events than before. You may feel dwarfed by forces new and colossal. I suspect we have little choice but to embrace them. Americans’ sense that they have lost control will only be worsened if other countries embrace the transformation and we lag behind.
Here is the full post.