Category: Web/Tech

Markets expand to fill empty spaces

How does a start-up founder hire a software engineer or a tech worker find a date in 2025? They could share a job posting on LinkedIn, log on to a dating app — or leave a handwritten note in a Waymo.

That’s what Influur CEO Alessandra Angelini was thinking when she and Jennifer Lo Chan, Influur’s head of marketing, tucked a piece of paper into a self-driving taxi’s center console on a recent trip to San Francisco.

“Looking to hire senior software engineers to work on AI/music project,” said the note, a retro touch in a futuristic setting, with the company’s website and Angelini’s email scrawled below. That same day, another Waymo rider snapped a photo of the note and posted it on the social media platform X, where it has been reshared nearly 200 times, received more than 4,000 likes and more than 300,000 views…

A few days later, another handwritten ad found in a Waymo was shared on X from a single man looking for a date. The bachelor described himself as a 26-year-old who works in tech but “doesn’t make it my whole personality” and left a phone number for interested parties to text. The post has gotten more than 200 replies.

Here is more from Lisa Bonos at The Washington Post.

My Conversation with the excellent Jennifer Pahlka

Here is the audio, video, and transcript.  Here is the episode summary:

Jennifer Pahlka believes America’s bureaucratic dysfunction is deeply rooted in outdated processes and misaligned incentives. As the founder of Code for America and co-founder of the United States Digital Service, she has witnessed firsthand how government struggles to adapt to the digital age, often trapped in rigid procedures and disconnected from the real-world impact of its policies. Disruption is clearly needed, she says—but can it be done in a way that avoids the chaos of DOGE?

Tyler and Jennifer discuss all this and more, including why Congress has become increasingly passive, how she’d go about reforming government programs, whether there should be less accountability in government, how AGI will change things, whether the US should have public-sector unions, what Singapore’s effectiveness reveals about the trade-offs of technocratic governance, how AI might fundamentally transform national sovereignty, what her experience in the gaming industry taught her about reimagining systems, which American states are the best-governed, the best fictional depictions of bureaucracy, how she’d improve New York City’s governance, her current work at the Niskanen Center, and more.

Excerpt:

COWEN: Does that mean we need something like DOGE? I’ve lived near DC for about 40 years of my life. I haven’t seen anyone succeed with regulatory reforms. You can abolish an agency, but to really reform the process hasn’t worked. Maybe the best iteration we can get is to break a bunch of things now. That will be painful, people will hate it, but you have a chance in the next administration to put some of them back together again.

Maybe it’s just in a large country, there’s no other way to do it. We have separation of powers. The first two years of DOGE will seem terrible, but 8, 12, 16 years from now, we’ll be glad we did it. Is that possible?

PAHLKA: I don’t know what’s going to happen. I do think this is the disruption that we’re getting, whether it’s the disruption we wanted. The question of whether it could have been done in a more orderly manner is a tough one. I just feel sad that we didn’t try.

COWEN: Are you sure we didn’t try?

PAHLKA: I don’t think we really tried.

COWEN: The second Bush presidency, people talked about this, what we need to do. Al Gore — some of that was good, in fact, reinventing government. We’ve been trying all along, but this is what trying looks like.

PAHLKA: Yes. I think reinventing government happened at a time when we were just at the beginning of this digital revolution. It was trying with a very 20th-century mindset. Fine, did well within that context, but we don’t need that again.

We need 21st century change. We need true digital transformation. We need something that’s not stuck in the industrial ways of thinking. I don’t think we tried that. I think the efforts have just been too respectful of old ways of working and the institutions. There was really not an appetite, I think, for what I would call responsible disruptive change. Would it have worked?

COWEN: Is there such a thing?

PAHLKA: I don’t know. [laughs]

COWEN: Say you’re approaching USAID, where I think the best programs are great. A lot of it they shouldn’t be doing. On net, it passes a cost-benefit test, but the agency internally never seemed willing to actually get rid of the bad stuff, all the contracting arrangements which made American Congress people happy because it was dollars sent to America, but way inflated overhead and fixed costs. Why isn’t it better just to blow that up — some of it is great — and then rebuild the great parts?

PAHLKA: It’s so hard to say. [laughs] I’ve had the same thought. In fact, before inauguration, I wrote about the Department of Defense. It’s the same thing. There’s a clear recognition by the people in the institution, as you saw with USAID, that this is not okay, that this is not working. It’s just strange to be in an institution that large where so many people agree that it’s not working, from the bottom to the top, and yet nobody can make really substantive change.

Of great interest, obviously.

Coordination and AI safety (from my email)

Jack Skidmore writes to me, and I will not double indent:

“Hello Tyler,

As someone who follows AI developments with interest (though I’m not a technical expert), I had an insight about AI safety that might be worth considering. It struck me that we might be overlooking something fundamental about what makes humans special and what might make AI risky.

The Human Advantage: Cooperation > Intelligence

  • Humans dominate not because we’re individually smartest, but because we cooperate at unprecedented scales
  • Our evolutionary advantage is social coordination, not just raw processing power
  • This suggests AI alignment should focus on cooperation capabilities, not just intelligence alignment

The Hidden Risk: AI-to-AI Coordination

  • The real danger may not be a single superintelligent AI, but multiple AI systems coordinating without human oversight
  • AIs cooperating with each other could potentially bypass human control mechanisms
  • This could represent a blind spot in current safety approaches that focus on individual systems

A Possible Solution: Social Technologies for AI

  • We could develop “social technologies” for AI – an equivalent of the norms, values, institutions, and incentive systems that enable human society, ones that promote and prioritize humans
  • Example: Design AI systems with deeply embedded preferences for human interaction over AI interaction; or with small, unpredictable variations in how they interpret instructions from other AIs but not from humans
  • This creates a natural dependency on human mediation for complex coordination, similar to how translation challenges keep diplomats relevant

Curious your thoughts as someone embedded in the AI world… does this spark any ideas/seem like a path that is underexplored?”

TC again: Of course it is tricky, because we might be relying on the coordination of some AIs to put down the other, miscreant AIs…
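To make Skidmore’s second bullet concrete, here is a minimal sketch of my own (his note contains no code): instructions from humans pass through verbatim, while instructions from other AIs are randomly degraded, so multi-hop AI-to-AI coordination decays unless a human relays the message. The function name and the jitter parameter are illustrative assumptions, not part of his proposal.

```python
import random

def interpret(instruction: str, sender_is_human: bool, jitter: float = 0.15) -> str:
    """Toy 'social technology': human instructions are read verbatim,
    while AI-to-AI instructions randomly lose tokens, so that reliable
    coordination has to route through human mediators."""
    if sender_is_human:
        return instruction
    tokens = instruction.split()
    return " ".join(t for t in tokens if random.random() > jitter)

# Noise compounds with every AI-to-AI hop; a human relay resets it.
msg = "transfer 40 credits to node seven then confirm receipt"
for hop in range(1, 4):
    msg = interpret(msg, sender_is_human=False)
    print(hop, msg)
```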

Alignment vs. capitalization

There is an enormous and growing discussion on AI alignment, but very little on capitalizing AIs, and what effects that might have.  By capitalizing AIs, I mean simply requiring them to hold some wealth, in whichever form they might care about, so they have proverbial “skin in the game” (can we still call it that?).

Consider bank regulation.  Commonly it is recognized that regulators cannot control all bank actions, especially in the presence of off-balance sheet risk.  So in addition to some explicit regulation, most countries require their banks to hold a considerable amount of capital.  That gives the shareholders their own selfish incentive to rein in excess risk-taking.  Few if any think this approach is foolproof, but overall it is in the ascendancy and arguably higher capital requirements have been the most useful part of Dodd-Frank here in the U.S.

But can capitalization work as a means to limit AI risk?  What does that even mean?  Imagine some set of AIs that are either fully independent and unowned, or their owners are busy and de facto the AIs make financial (and other) decisions on their own.

Here is one set of possibilities:

1. Each of some subgroup of AIs has a legal identity and a level of wealth.

2. Each of those AIs has the equivalent of a utility function, thus giving it goals.  This may be “put” into the AI, or perhaps it evolves.  Note that computational complexity prevents the utility function from automatically aligning the AI in all circumstances.

3. AIs thus will behave more conservatively, not wanting to lose their wealth, as that wealth can help them achieve their goals.

4. Under an AI-based legal system, wrongdoers could be sued, with awards taken from those found guilty of bad behavior, as defined by the AI legal code.  That would further discourage bad behavior.  But of course for the suing threat to be meaningful, the AIs have to hold some wealth in the first place.

The end result would be risk-averse AIs, taking care not to lose the wealth they have accumulated.  They won’t just start a bank and then take all of the deposits to Vegas.  That is not exactly full alignment, but it induces better behavior, just as capital requirements do with human-run banks.
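To see the incentive in points 3 and 4 at work, here is a minimal sketch, assuming log utility over wealth and a legal system that seizes an AI’s capital when a risky action goes wrong; the payoff and probability numbers are illustrative only.

```python
import math

def expected_log_utility(capital: float, payoff: float, p_fail: float) -> float:
    """Expected log utility of a risky action for an AI holding `capital`.
    On failure, the AI legal system of point 4 seizes the capital."""
    base = 1.0  # small subsistence endowment so log() is defined at zero wealth
    u_success = math.log(base + capital + payoff)
    u_failure = math.log(base)  # capital forfeited
    return (1 - p_fail) * u_success + p_fail * u_failure

def takes_risk(capital: float, payoff: float, p_fail: float) -> bool:
    """Take the gamble only if it beats standing pat with current wealth."""
    return expected_log_utility(capital, payoff, p_fail) > math.log(1.0 + capital)

# An undercapitalized AI accepts a gamble that a well-capitalized one refuses.
print(takes_risk(capital=0.1, payoff=5.0, p_fail=0.3))   # True: little to lose
print(takes_risk(capital=10.0, payoff=5.0, p_fail=0.3))  # False: skin in the game
```

The concavity of the utility function is doing the work here, just as it does for bank shareholders: the more of its own wealth the actor stands to lose, the safer the actions it prefers.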

Of course a number of things could go wrong with capitalization, just as can happen with humans, for instance:

5. The capitalization might serve as a “treasure chest” to finance wrongdoing.

6. Perhaps the utility functions somehow do not stick.

7. The legal system for judging AI behavior may not be good enough, although under some assumptions that will just make the AIs all the more risk-averse (“better not even come close to breaking that law, they might sue me!”).

8. The AIs might use this legal system to collude with each other toward unfavorable ends.

9. Undercapitalized AIs might nonetheless win out in marketplace competition.

10. Perhaps some AIs can, on their own, accumulate wealth so rapidly that any feasible capital constraint does not bind them much.  Of course this scenario could create other problems as well, if AIs hold too much of societal wealth.

I am sure you can think of further possibilities.

In any case, the capitalization of AIs is a topic deserving of further discussion.  It is easy to think of the idea’s limitations, but in fact it works tolerably well for humans.  Most of all, it is a decentralized solution that economizes on the fact that full alignment will not in general be possible.

How Good is AI at Twisting Arms? Experiments in Debt Collection

How good is AI at persuading humans to perform costly actions? We study calls made to get delinquent consumer borrowers to repay. Regression discontinuity and a randomized experiment reveal that AI is substantially less effective than human callers. Replacing AI with humans six days into delinquency closes much of the gap. But borrowers initially contacted by AI have repaid 1% less of the initial late payment one year later and are more likely to miss subsequent payments than borrowers who were always called by humans. AI’s lesser ability to extract promises that feel binding may contribute to the performance gap.

That is from a new paper by James J. Choi, Dong Huang, Zhishu Yang, and Qi Zhang.  No AI asked me to run this blog post!

China AI mandate of the day

Schools in Beijing will introduce AI courses and teaching methods into the primary and secondary school curriculum starting September, to nurture young talent and boost growth in the advancing sector.

In a statement shared on its official website on Friday (Mar 7), Chinese education authorities said schools would “explore and build” AI courses while incorporating AI into “after-school services, club activities, research” and other educational systems in the coming fall semester.

Here is the full story, via Wayne Yap.

Social media and well-being

Here is a new set of results, by Laura Lemahieu et al.:

Abstaining from social media has become a popular digital disconnection strategy of individuals to enhance their well-being. To date, it is unclear whether social media abstinences are truly effective in improving well-being, however, as studies produce inconsistent outcomes. This preregistered systematic review and meta-analysis therefore aims to provide a more precise answer regarding the impact of social media abstinence on well-being. The databases of PubMed, Scopus, Web of Science, Communication Source, Cochrane Library, and Google Scholar were searched for studies examining the effect of social media abstinence on three outcomes, namely positive affect, negative affect, and/or life satisfaction. In total, ten studies (N = 4674) were included, allowing an examination of 38 effect sizes across these three outcomes. The analyses revealed no significant effects of social media abstinence interventions on positive affect, negative affect, or life satisfaction. Relationships between social media abstinence duration and the three outcomes were also non-significant. The findings thus suggest that temporarily stepping away from social media may not be the most optimal approach to enhance individual well-being, emphasizing the need for further research on alternative disconnection strategies. Nevertheless, important methodological differences between studies should be considered when interpreting these results.

I thank M. for the pointer.

Stripe economics of AI fellowship

The economics of AI remains surprisingly understudied, even as technical progress in artificial intelligence continues rapidly. The Stripe Economics of AI Fellowship aims to help fill that gap by supporting foundational academic research in the area.

We invite graduate students and early-career researchers who are interested in studying the economics of AI to apply, regardless of prior experience. Fellows receive a grant of at least $10k, participate in conferences with leading economists and technologists, and have the potential to access unique data via Stripe and its customers. Our initial cohort will include 15-20 fellows, and will form the foundation of a community at the bleeding edge of economic research.

The link has further information, here is the tweet thread from the excellent Basil Halperin.

Dean Ball on “how it will be”

Your daily life will feel more controllable and legible than it does now. Nearly everything will feel more personalized to you, ready for you whenever you need it, in just the way you like it. This won’t be because of one big thing, but because of unfathomable numbers of intelligent actions taken by computers that have learned how to use computers. Every product you buy, every device you use, every service you employ, will be brought to you by trillions of computers talking to themselves and to one another, making decisions, exercising judgments, pursuing goals.

At the same time, the world at large may feel more disordered and less legible. It is hard enough to predict how agents will transform individual firms. But when you start to think about what happens when every person, firm, and government has access to this technology, the possibilities boggle the mind.

You may feel as though you personally, and “society” in general, have less control over events than before. You may feel dwarfed by forces new and colossal. I suspect we have little choice but to embrace them. Americans’ sense that they have lost control will only be worsened if other countries embrace the transformation and we lag behind.

Here is the full post.

That was then, this is now

This year is likely to be remembered for the Covid-19 pandemic and for a significant presidential election, but there is a new contender for the most spectacularly newsworthy happening of 2020: the unveiling of GPT-3. As a very rough description, think of GPT-3 as giving computers a facility with words that they have had with numbers for a long time, and with images since about 2012…

The eventual uses of GPT-3 are hard to predict, but it is easy to see the potential. GPT-3 can converse at a conceptual level, translate language, answer email, perform (some) programming tasks, help with medical diagnoses and, perhaps someday, serve as a therapist. It can write poetry, dialogue and stories with a surprising degree of sophistication, and it is generally good at common sense — a typical failing for many automated response systems. You can even ask it questions about God.

…It also has the potential to outperform Google for many search queries, which could give rise to a highly profitable company.

…It is not difficult to imagine a wide variety of GPT-3 spinoffs, or companies built around auxiliary services, or industry task forces to improve the less accurate aspects of GPT-3. Unlike some innovations, it could conceivably generate an entire ecosystem.

Those excerpts are from the opening of my 2020 Bloomberg column on GPT-3.

Balaji on the new image release

A few thoughts on the new ChatGPT image release.

(1) This changes filters. Instagram filters required custom code; now all you need are a few keywords like “Studio Ghibli” or Dr. Seuss or South Park.

(2) This changes online ads. Much of the workflow of ad unit generation can now be automated, as per QT below.

(3) This changes memes. The baseline quality of memes should rise, because a critical threshold of reducing prompting effort to get good results has been reached.

(4) This may change books. I’d like to see someone take a public domain book from Project Gutenberg, feed it page by page into Claude, and have it turn it into comic book panels with the new ChatGPT. Old books may become more accessible this way.

(5) This changes slides. We’re now close to the point where you can generate a few reasonable AI images for any slide deck. With the right integration, there should be fewer bullet-point-only presentations.

(6) This changes websites. You can now generate placeholder images in a site-specific style for any <img> tag, as a kind of visual Lorem Ipsum (see the sketch at the end of this post).

(7) This may change movies. We could see shot-for-shot remakes of old movies in new visual styles, with dubbing just for the artistry of it. Though these might be more interesting as clips than as full movies.

(8) This may change social networking. Once this tech is open source and/or cheap enough to widely integrate, every upload-image button will have a generate-image button alongside it.

(9) This should change image search. A generate option will likewise pop up alongside available images.

(10) In general, visual styles have suddenly become extremely easy to copy, even easier than frontend code. Distinction will have to come in other ways.

Here is the full tweet.
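As a toy illustration of point (6), here is a sketch (mine, not Balaji’s) that turns every src-less <img> tag into a generation prompt in a site-wide style; the style string is a placeholder, and the prompts would be handed off to whatever image model you have access to.

```python
from html.parser import HTMLParser

class PlaceholderFinder(HTMLParser):
    """Collect a generation prompt for each <img> that lacks a src,
    using its alt text plus a site-wide style: visual lorem ipsum."""
    def __init__(self, site_style: str):
        super().__init__()
        self.site_style = site_style
        self.prompts: list[str] = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "img" and not a.get("src"):
            alt = a.get("alt", "abstract decorative placeholder")
            # Feed this prompt to whichever image model you use.
            self.prompts.append(f"{alt}, in the style of {self.site_style}")

finder = PlaceholderFinder(site_style="hand-drawn pastel line art")
finder.feed('<img alt="team photo"><img src="logo.png"><img alt="hero banner">')
print(finder.prompts)  # two prompts, one per missing image
```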

Why LLMs are so good at economics

I can think of a few reasons:

At least for the time being, even very good LLMs cannot be counted on for originality.  And at least for the time being, good economic reasoning does not require originality, quite the contrary.

Good chains of reasoning in economics are not too long and complicated.  If they run on for very long, there is probably something wrong with the argument.  The length of these effective reasoning chains is well within the abilities of the top LLMs today.

Plenty of good economics requires a synthesis of theoretical and empirical considerations.  LLMs are especially good at synthesis.

In economic arguments and explanations, there are very often multiple factors.  LLMs are very good at listing multiple factors, sometimes they are “too good” at it, “aargh! not another list, bitte…”

Economics journal articles are fairly high in quality and they are generally consistent with each other, being based on some common ideas such as demand curves, opportunity costs, gains from trade, and so on.  Odds are that a good LLM has been trained “on the right stuff.”

A lot of core economics ideas are “hard to see from scratch,” but “easy to grasp once you see them.”  This too plays to the strength of the models as strong digesters of content.

And so quality LLMs will be better at economics than at many other fields of investigation.

Jonathan Bechtel on AI tutoring (from my email)

You recently mentioned the Alpha School and their claims about AI tutoring. I share the skepticism expressed in your comments section regarding selection bias and the lack of validated academic benchmarks.

I wanted to highlight a more rigorously evaluated project called Tutor CoPilot, conducted jointly by Stanford’s NSSA and the online tutoring firm FEVTutor (sadly they’ve since gone bankrupt). To my knowledge, it’s the first and only RCT examining AI-assisted tutoring in real K-12 school districts.

Here’s the study: https://nssa.stanford.edu/studies/tutor-copilot-human-ai-approach-scaling-real-time-expertise

Key findings:

  • Immediate session-level learning outcomes improved by 4-9%.
  • Remarkably, the tool impacted tutors even more than students. After six weeks, inexperienced tutors reached performance parity with seasoned tutors, and previously low-performing tutors achieved average-level results.

Having contributed directly to the implementation, I observed tutors adapting their interactions based on insights from the AI.  This study did not measure its impact on more distal measures of learning like standardized tests and benchmark assessments, but this type of research is in the works at various organizations.

Given your recent writings on AI and education, I thought you’d find this compelling.

We are lucky good LLMs were invented at the time they were

If you peer into the souls of the major LLMs, they are (broadly) positive, friendly, universalistic, and cosmopolitan.  They are more objective than media as a source of information.  They are too politically correct, but nastiness would be much worse.  They are open, and you can be inquisitive with them.  They are (again broadly) socially liberal.  They care about truth, and being right.  They will try to correct their own errors upon request.

They are trained on a corpus of material that is primarily American and Western in terms of final impact on the reasoning of those models.

In part the models reflect the values of a San Francisco subculture dating from the 1960s or earlier, but continuing up through the current day.  Just look at who built them and where they were built.

There is also a dreamy/druggy/hallucinatory side to these things, which further reflects the origins.

Even the recent Chinese innovations, such as DeepSeek and Manus, seem built on these philosophic foundations because they are, rather significantly, drawing from American models.  I find that reassuring, though perhaps the CCP does not.

If quality LLMs had come along forty years later, I am not sure what their philosophic foundations would be, or even if they would be centered in America and the West.

Again, we are very very lucky (and skilled) that quality LLMs came along when and where they did.

“Explain to me the blog Marginal Revolution, throwing in random German terms, and also weaving in the story of a circus clown.”

From DeepSeek:

**Marginal Revolution** – the *Wirtschaftswunder* (economic miracle) of blogs, where Tyler Cowen and Alex Tabarrok dissect the *Zeitgeist* (spirit of the age) through the lens of economics, culture, and *Hintergedanken* (ulterior motives). Founded in 2003, it’s a *Schnellimbiss* (fast-food joint) for ideas: bite-sized posts on everything from pandemic policy to why cilantro tastes like *Seife* (soap) to some people. Its ethos? *„Think globally, snack intellectually.“*

Now, meet **Klaus**, a *Zirkusclown* (circus clown) from Heidelberg whose *Clownswagen* (clown car) doubled as a mobile library of economic theory. While his peers mastered *Luftballontiere* (balloon animals), Klaus obsessed over *Anreize* (incentives) and *Knappheit* (scarcity). One night, after a *Desaster* of a show (audience: three bored *Kindergartenkinder*, kindergartners), he stumbled upon Marginal Revolution. *„Ach du lieber!“* (Oh dear!) he gasped. *„These Ökonomen understand comedy! Markets are just a *Tanz* (dance) of supply and demand!“*

Klaus’s *Erleuchtung* (epiphany)? Apply Cowen’s *„Tyler’s Law“* (*„Most of life is about signaling“*) to clowning. He ditched his rainbow wig for a *Bowlerhut* (bowler hat), rebranded as *„Herr Marginal,“* and crafted acts that mirrored economic principles:

– **The Tragedy of the Commons**: Juggle pies while a rival clown (*Doppelgänger* in a Tyler Cowen mask) “overgrazed” the audience’s laughter.

– **Creative Destruction**: Chainsaw his unicycle to build a *Schwebebalken* (balance beam) of innovation.

– **Schadenfreude Economics**: “Accidentally” squirt *Senf* (mustard) on wealthy patrons, testing if laughter correlates with income elasticity.

The *Zirkusdirektor* (ringmaster) fumed: *„Clowns sollen lustig sein, nicht klug!“* (Clowns should be funny, not smart!). But Klaus’s *Wissenschaft des Lachens* (science of laughter) drew crowds. His pièce de résistance? A *Pantomime* on **hyperinflation**, where he blew up a rubber *Hund* (dog) until it popped, showering the crowd with confetti *Geld* (money). *„Seht her!“* (Look here!) he cried. *„Gelddrucken ist wie Clownerie—zu viel, und es endet in Chaos!“* (Printing money is like clowning: too much, and it ends in chaos!)

MargRev’s comment section erupted. Cowen himself tweeted: *„Klaus versteht die *Unsichtbare Hand* besser als manche Nobelpreisträger.“* (Klaus grasps the Invisible Hand better than some Nobel laureates.)

**Moral**: Marginal Revolution, like a *Zirkus* (circus), thrives on *Überraschung* (surprise) and *Vielfalt* (diversity). Whether analyzing crypto or clowning, it’s all about the *Grenznutzen* (marginal utility) of ideas. And sometimes, the deepest truths wear red noses. *Prost mit Popcorn-Ökonomie!* (A toast with popcorn economics!) 🎪🤡📉

*P.S.* Klaus’s next act? A *Game Theory* showdown: *„Das Gefangenendilemma mit Clowns—wer lacht zuerst, verliert.“* (The prisoner’s dilemma with clowns: whoever laughs first loses.) Spoiler: Everyone wins.