Category: Web/Tech

My excellent Conversation with Cass Sunstein

Cass was in top form, and so we went on for almost two hours.  In his Substack he described it as “The most fun interview I have ever done.”  Here is the audio, video, and transcript.  Here is part of the episode summary:

Tyler and Cass discuss whether liberalism is self-undermining or simply vulnerable to illiberal forces, the tensions in how a liberal immigration regime would work, whether new generations of liberal thinkers are emerging, if Derek Parfit counts as a liberal, Mill’s liberal wokeism, the allure of Mises’ “cranky enthusiasm for freedom,” whether the central claim of The Road to Serfdom holds up, how to blend indigenous rights with liberal thought, whether AIs should have First Amendment protections, the argument for establishing a right not to be manipulated, better remedies for low-grade libel, whether we should have trials run by AI, how Bob Dylan embodies liberal freedom, Cass’ next book about animal rights, and more.

I will reproduce the section Cass pulled for his own Substack:

COWEN: Now, we started with the topic of liberalism. How is it you think about or characterize the liberalism of Bob Dylan?

SUNSTEIN: Bob Dylan is a liberal. His liberalism is captured in the line, “He not busy being born is busy dying.” I hope he’s immortal, but if anything is on his epitaph, that would be a good candidate.

The notion of self-invention, of freedom, is central to basically everything. His refusal to keep singing the same song — you can hear him talking about it in some of the interviews. He said, “I could do that. I could just do that forever. I knew how they’d react.” He said, “What’s that about?” He said, “I needed to do something else.” But of course, the line, “I needed to do something else” — that’s my line. How he would put it would be much more vivid and surprising than that.

His “Like a Rolling Stone” is an anthem of freedom. I heard it, actually, in concert a few years ago. It was a great performance. It wasn’t young, but it was a great performance. The audience went wild when he did “Like a Rolling Stone.” That was the final song. It was the encore. It wasn’t just because it was the greatest rock song ever written. It was because of how he did it. I thought, “What’s going on in this song? Why is everyone exhilarated?” The song, which he described when he wrote it as vomit, hatred directed at somewhere that was real — it wasn’t that, or it was a little bit that, but it was a song of liberty.

“How does it feel to be on your own with no direction home, like a complete unknown, like a rolling stone?” Everyone felt like they were flying. He makes that — “Like a Rolling Stone” — be a song of freedom. If you look at his angry songs — “Positively 4th Street” — there’s a freedom in being, of course, uninhibited, able to say things, but also a freedom of disconnection.

When he’s asked why did he change his name, I have an account of why he actually did. I think he gave it exactly once, but in his more characteristic way, he said, “This is America. You can change your name.” Then he said, “I was born. I didn’t think I was born with the right name. I could make it up. I could say that sounds more like I was.”

Making rootlessness not be a curse, but instead something that is . . . the word joy is too clichéd for Dylan. If you look at his love songs, like “If You See Her, Say Hello,” which isn’t one of my favorites, but it’s good. There’s a connection with the one he loved, who got away, but you can feel the sense of freedom.

COWEN: “Visions of Johanna”?

SUNSTEIN: Yes, completely. He’s torn. That has the great opening line. “Ain’t it just like the night to play tricks when you’re trying to be so quiet?” Did Yeats write better lines than that? Probably, but he was Yeats.

COWEN: Blood on the Tracks — a liberal album?

SUNSTEIN: Oh, yes.

COWEN: How would you express that?

SUNSTEIN: Well, I’m thinking “Buckets of Rain” is the closing song. Right before that, there’s a song, “You’re Gonna Make Me Lonesome When You Go.” That’s it, which is, I think, one of his greatest songs. That’s a liberal song of freedom and separation, that she’s going, but he’s going to see her everywhere, and there’s smiling at impermanence. That is a big liberal theme — smiling at impermanence — because impermanence makes things not routine and also makes for freedom.

COWEN: “Idiot Wind” is the angry song of the batch, right?

SUNSTEIN: Yes, it’s pretty mad. He said about that song, “I don’t know why people like it. There’s so much sadness and distress in it.”

COWEN: Do you see your own liberalism or just yourself in the liberalism of Bob Dylan?

SUNSTEIN: I think so.

COWEN: Reinventing yourself, not quite wanting to be pinned down, doing a lot of stuff.

SUNSTEIN: He likes, I think, abandoning and going on to something that’s very different. I wish I’d gone electric or had some equivalent of that. But doing something quite different — I do share a little bit with him. I like it when I think something I thought was wrong. I now am very enthusiastic about the Austrian economists and Hayek. I’ve always admired them, of course, but I didn’t feel that they were on my team. Now I feel I’ve gone to their team. I don’t feel ashamed that I was wrong before. I feel excited that I’m less wrong now.

Definitely recommended; I could have pulled out many other parts as well.  Again, I am happy to recommend Cass’s new book Liberalism: In Defense of Freedom.

My very fun Conversation with Blake Scholl

Here is the audio, video, and transcript.  This was at a live event (the excellent Roots of Progress conference), so it is only about forty minutes, shorter than usual.  Here is the episode summary:

Blake Scholl is one of the leading figures working to bring back civilian supersonic flight. As the founder and CEO of Boom Supersonic, he’s building a new generation of supersonic aircraft and pushing for the policies needed to make commercial supersonic travel viable again. But he’s equally impressive as someone who thinks systematically about improving dysfunction—whether it’s airport design, traffic congestion, or defense procurement—and sees creative solutions to problems everyone else has learned to accept.

Tyler and Blake discuss why airport terminals should be underground, why every road needs a toll, what’s wrong with how we board planes, the contrasting cultures of Amazon and Groupon, why Concorde and Apollo were impressive tech demos but terrible products, what Ayn Rand understood about supersonic transport in 1957, what’s wrong with aerospace manufacturing, his heuristic when confronting evident stupidity, his technique for mastering new domains, how LLMs are revolutionizing regulatory paperwork, and much more.

Excerpt:

COWEN: There’s plenty about Boom online and in your interviews, so I’d like to take some different tacks here. This general notion of having things move more quickly, I’m a big fan of that. Do you have a plan for how we could make moving through an airport happen more quickly? You’re in charge. You’re the dictator. You don’t have to worry about bureaucratic obstacles. You just do it.

SCHOLL: I think about this in the shower like every day. There is a much better airport design that, as best I can tell, has never been built. Here’s the idea: You should put the terminals underground. Airside is above ground. Terminals are below ground. Imagine a design with two runways. There’s an arrival runway, departure runway. Traffic flows from arrival runway to departure runway. You don’t need tugs. You can delete a whole bunch of airport infrastructure.

Imagine you pull into a gate. The jetway is actually an escalator that comes up from underneath the ground. Then you pull forward, so you can delete a whole bunch of claptrap that is just unnecessary. The terminal underground should have skylights so it can still be incredibly beautiful. If you model fundamentally the thing on a crossbar switch, there are a whole bunch of insights for how to make it radically more efficient. Sorry. This is a blog post I want to write one day. Actually, it’s an airport I want to build.

And:

COWEN: I’m at the United desk. I have some kind of question. There’s only two or three people in front of me, but it takes forever. I notice they’re just talking back and forth to the assistant. They’re discussing the weather or the future prospects for progress, total factor productivity. I don’t know. I’m frustrated. How can we make that process faster? What’s going wrong there?

SCHOLL: The thing I most don’t understand is why it requires so many keystrokes to check into a hotel room. What are they writing?

What are they writing?

Is the AI sector currently a bubble?

Possibly, but do not jump to that conclusion too quickly, as I argued in my latest Free Press column.  Excerpt:

Nvidia is often considered a bellwether AI stock. That’s because much of its revenue comes from selling graphics processing units to power advanced AI systems, meaning that its success gives investors insight into the health of the sector overall. Currently, Nvidia’s stock-price-to-earnings ratio is in the 54 to 55 range, roughly twice the typical market average. That means the market expects great things from this stock. Those projections may or may not be validated, but it’s hard to conclude they’re entirely divorced from reality…

Keep in mind that the tech sector as a whole is still earning more than it is shelling out in capital expenditures. The current AI boom is being financed by earnings more than by new issuance of debt, which makes it less prone to a sudden crash. By one estimate, capital expenditures in Big Tech are about 94 percent of cash flow in 2025. You could imagine that number moving into unstable territory, but so far, the U.S. tech sector is managing to pay its bills without going into debt.

You may recall we are coming off a period when everyone complained that the big tech companies were sitting on trillions of dollars in cash and capital. Now, they are spending it, and complaints are heating up once again. Damned if they do, damned if they don’t.

In fact, what we are seeing right now is a shortage in the AI sector’s capacity to meet demand. Major tech companies are investing in more computing capacity, but they still cannot serve all the customers who want access to AI systems. That augurs well for the future of the sector, even if there are dips and spills along the way.

As usual, we will see, but if you are calling it a bubble after an initial price dip or corporate shake-out, that is exactly the fallacy you are not supposed to be slipping into.

Do (human) readers prefer AI writers?

It seems so; do read through the whole abstract:

The use of copyrighted books for training AI models has led to numerous lawsuits from authors concerned about AI’s ability to generate derivative content. Yet it’s unclear whether these models can generate high-quality literary text while emulating authors’ styles/voices. To answer this we conducted a preregistered study comparing MFA-trained expert writers with three frontier AI models: ChatGPT, Claude, and Gemini in writing up to 450-word excerpts emulating 50 award-winning authors’ (including Nobel laureates, Booker Prize winners, and young emerging National Book Award finalists) diverse styles. In blind pairwise evaluations by 159 representative expert (MFA-trained writers from top U.S. writing programs) and lay readers (recruited via Prolific), AI-generated text from in-context prompting was strongly disfavored by experts for both stylistic fidelity (odds ratio [OR]=0.16, p < 10^-8) and writing quality (OR=0.13, p < 10^-7) but showed mixed results with lay readers. However, fine-tuning ChatGPT on an individual author’s complete works completely reversed these findings: experts now favored AI-generated text for stylistic fidelity (OR=8.16, p < 10^-13) and writing quality (OR=1.87, p=0.010), with lay readers showing similar shifts. These effects are robust under cluster-robust inference and generalize across authors and styles in author-level heterogeneity analyses. The fine-tuned outputs were rarely flagged as AI-generated (3% rate versus 97% for in-context prompting) by state-of-the-art AI detectors. Mediation analysis reveals this reversal occurs because fine-tuning eliminates detectable AI stylistic quirks (e.g., cliché density) that penalize in-context outputs, altering the relationship between AI detectability and reader preference.
While we do not account for additional costs of human effort required to transform raw AI output into cohesive, publishable novel-length prose, the median fine-tuning and inference cost of $81 per author represents a dramatic 99.7% reduction compared to typical professional writer compensation. Author-specific fine-tuning thus enables non-verbatim AI writing that readers prefer to expert human writing, thereby providing empirical evidence directly relevant to copyright’s fourth fair-use factor, the “effect upon the potential market or value” of the source works.

That is from a new paper by Tuhin Chakrabarty, Jane C. Ginsburg, and Paramveer Dhillon.  For the pointer I thank the excellent Kevin Lewis.  I recall an earlier piece showing that LLMs also prefer LLM outputs?
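A note for readers unused to odds ratios: if you treat each reported OR as the raw odds that evaluators picked the AI text in a pairwise matchup (a simplification, since the paper’s ORs come from regression models with cluster-robust inference), you can back out the implied preference shares:

```python
def preference_share(odds_ratio: float) -> float:
    """Convert pairwise-preference odds into the implied share of
    comparisons in which the AI text was preferred."""
    return odds_ratio / (1.0 + odds_ratio)

# Odds ratios for stylistic fidelity, as reported in the abstract:
in_context_or = 0.16   # experts judging in-context-prompted AI text
fine_tuned_or = 8.16   # experts judging author-fine-tuned AI text

print(f"In-context: AI preferred in ~{preference_share(in_context_or):.0%} of pairs")
print(f"Fine-tuned: AI preferred in ~{preference_share(fine_tuned_or):.0%} of pairs")
# → roughly 14% versus 89%
```

So on this rough reading, fine-tuning moves expert preference for the AI text from about one pair in seven to nearly nine in ten, at least on the stylistic-fidelity measure.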

“Some Economics of Artificial Super Intelligence”

I promised to pass along serious models of pending AI doom, especially if they are peer-reviewed or at least headed for such.  The AI doomer types still are dropping the ball on this, but one economist has made a contribution and so here it is from Henry A. Thompson:

Conventional wisdom holds that a misaligned artificial superintelligence (ASI) will destroy humanity. But the problem of constraining a powerful agent is not new. I apply classic economic logic of interjurisdictional competition, all-encompassing interest, and trading on credit to the threat of misaligned ASI. Using a simple model, I show that an acquisitive ASI refrains from full predation under surprisingly weak conditions. When humans can flee to rivals, inter-ASI competition creates a market that tempers predation. When trapped by a monopolist ASI, its “encompassing interest” in humanity’s output makes it a rational autocrat rather than a ravager. And when the ASI has no long-term stake, our ability to withhold future output incentivizes it to trade on credit rather than steal. In each extension, humanity’s welfare progressively worsens. But each case suggests that catastrophe is not a foregone conclusion. The dismal science, ironically, offers an optimistic take on our superintelligent future.

You may or may not agree, but as usual the point is to build out a literature, not to regard any single paper as the final word.  Via the excellent Joy Buchanan.

Waymo

Waymo now does highways in the Bay Area.

Expanding our service territory in the Bay Area and introducing freeways is built on real-world performance and millions of miles logged on freeways, skillfully handling highway dynamics with our employees and guests in Phoenix, San Francisco, and Los Angeles. This experience, reinforced by comprehensive testing as well as extensive operational preparation, supports the delivery of a safe and reliable service.

The future is happening fast.

Solve for the NIMBY equilibrium?

We are just beginning to think these issues through:

The government’s plan to use artificial intelligence to accelerate planning for new homes may be about to hit an unexpected roadblock: AI-powered nimbyism.

A new service called Objector is offering “policy-backed objections in minutes” to people who are upset about planning applications near their homes.

It uses generative AI to scan planning applications and check for grounds for objection, ranking these as “high”, “medium” or “low” impact. It then automatically creates objection letters, AI-written speeches to deliver to the planning committees, and even AI-generated videos to “influence councillors”.

Kent residents Hannah and Paul George designed the system after estimating they spent hundreds of hours attempting to navigate the planning process when they opposed plans to convert a building near their home into a mosque.

Here is the full story.  Via Aaron K.

In defense of Schumpeter

Factories of Ideas? Big Business and the Golden Age of American Innovation (Job Market Paper) [PDF]

This paper studies the Great Merger Wave (GMW) of 1895-1904—the largest consolidation event in U.S. history—to identify how Big Business affected American innovation. Between 1880 and 1940, the U.S. experienced a golden age of breakthrough discoveries in chemistry, electronics, and telecommunications that established its technological leadership. Using newly constructed data linking firms, patents, and inventors, I show that consolidation substantially increased innovation. Among firms already innovating before the GMW, consolidation led to an increase of 6 patents and 0.6 breakthroughs per year—roughly four-fold and six-fold increases, respectively. Firms with no prior patents were more likely to begin innovating. The establishment of corporate R&D laboratories served as a key mechanism driving these gains. Building a matched inventor–firm panel, I show that lab-owning firms enjoyed a productivity premium not due to inventor sorting, robust within size and technology classes. To assess whether firm-level effects translated into broader technological progress, I examine total patenting within technological domains. Overall, the GMW increased breakthroughs by 13% between 1905 and 1940, with the largest gains in science-based fields (30% increase).

That is the job market paper of Pier Paolo Creanza, who is on the market this year from Princeton.

My excellent Conversation with Sam Altman

Recorded live in Berkeley, at the Roots of Progress conference (an amazing event), here is the material with transcript, here is the episode summary:

Sam Altman makes his second appearance on the show to discuss how he’s managing OpenAI’s explosive growth, what he’s learned about hiring hardware people, what makes roon special, how far they are from an AI-driven replacement to Slack, what GPT-6 might enable for scientific research, when we’ll see entire divisions of companies run mostly by AI, what he looks for in hires to gauge their AI-resistance, how OpenAI is thinking about commerce, whether GPT-6 will write great poetry, why energy is the binding constraint to chip-building and where it’ll come from, his updated plan for how he’d revitalize St. Louis, why he’s not worried about teaching normies to use AI, what will happen to the price of healthcare and housing, his evolving views on freedom of expression, why accidental AI persuasion worries him more than intentional takeover, the question he posed to the Dalai Lama about superintelligence, and more.

Excerpt:

COWEN: What is it about GPT-6 that makes that special to you?

ALTMAN: If GPT-3 was the first moment where you saw a glimmer of something that felt like the spiritual Turing test getting passed, GPT-5 is the first moment where you see a glimmer of AI doing new science. It’s very tiny things, but here and there someone’s posting like, “Oh, it figured this thing out,” or “Oh, it came up with this new idea,” or “Oh, it was a useful collaborator on this paper.” There is a chance that GPT-6 will be a GPT-3 to 4-like leap that happened for Turing test-like stuff for science, where 5 has these tiny glimmers and 6 can really do it.

COWEN: Let’s say I run a science lab, and I know GPT-6 is coming. What should I be doing now to prepare for that?

ALTMAN: It’s always a very hard question. Even if you know this thing is coming, if you adapt your —

COWEN: Let’s say I even had it now, right? What exactly would I do the next morning?

ALTMAN: I guess the first thing you would do is just type in the current research questions you’re struggling with, and maybe it’ll say, “Here’s an idea,” or “Run this experiment,” or “Go do this other thing.”

COWEN: If I’m thinking about restructuring an entire organization to have GPT-6 or 7 or whatever at the center of it, what is it I should be doing organizationally, rather than just having all my top people use it as add-ons to their current stock of knowledge?

ALTMAN: I’ve thought about this more for the context of companies than scientists, just because I understand that better. I think it’s a very important question. Right now, I have met some orgs that are really saying, “Okay, we’re going to adopt AI and let AI do this.” I’m very interested in this, because shame on me if OpenAI is not the first big company run by an AI CEO, right?

COWEN: Just parts of it. Not the whole thing.

ALTMAN: No, the whole thing.

COWEN: That’s very ambitious. Just the finance department, whatever.

ALTMAN: Well, but eventually it should get to the whole thing, right? So we can use this and then try to work backwards from that. I find this a very interesting thought experiment of what would have to happen for an AI CEO to be able to do a much better job of running OpenAI than me, which clearly will happen someday. How can we accelerate that? What’s in the way of that? I have found that to be a super useful thought experiment for how we design our org over time and what the other pieces and roadblocks will be. I assume someone running a science lab should try to think the same way, and they’ll come to different conclusions.

COWEN: How far off do you think it is that just, say, one division of OpenAI is 85 percent run by AIs?

ALTMAN: Any single division?

COWEN: Not a tiny, insignificant division, mostly run by the AIs.

ALTMAN: Some small single-digit number of years, not very far. When do you think I can be like, “Okay, Mr. AI CEO, you take over”?

Of course we discuss roon as well, not to mention life on the moons of Saturn…

Andrej and Dwarkesh as philosophy

If you follow AI at all, you probably do not need another recommendation of the Andrej Karpathy and Dwarkesh Patel podcast, linked to here:

I hardly ever listen to podcasts, but at almost two and a half hours I found this one worthwhile and that was at 1x (I don’t listen to podcasts at higher speed, not wanting to disrupt the drama of the personalities).  What struck me is how philosophical so many aspects of the discussion were.  Will this end up being the best “piece of philosophy” done this year?  Probably.  Neither participant of course is a trained philosopher, but neither were Plato or Kierkegaard.  They are both very focused on real issues however, and new issues at that.  And dialogue is hardly a disqualifying medium when it comes to philosophy.

Some guy on Twitter felt I was slighting this book in my tweet on the matter.  I’ll let history judge this one, as we’ll see which issues people are still talking about fifty years from now (note I said nothing against that book in my tweet, nor against contemporary philosophy, I just said this podcast was philosophical and very good).  I’ve made the point before (pre-LLM) that current academic philosophers are losing rather dramatically in the fight for intellectual influence, and perhaps more of a serious engagement with these issues would help.  I’ve seen plenty of philosophical work on AI, but none of it yet seems to be interesting.  For that you have to go to the practitioners and the Bay Area obsessives.

The American economy is showing its flexibility

That is the theme of my recent Free Press column on AI.  Excerpt:

The more quickly the world changes, the more the quality of our capital markets matters. And the world is indeed changing quickly: AI will soon be present in virtually every job. Many of us already use it informally for legal and medical advice, research, and even companionship.

And to close:

From the point of view of an investor, it matters whether or not we’re in an AI bubble. But if you are seeking to understand long-term social and economic trends, the bubble question is primarily a matter of short-term interest and timing. It will not decide where the economy is headed long-term.

Instead, what we are seeing is that America, at the drop of a hat, can turn on a dime and reallocate capital on an unparalleled scale, to our great and enduring benefit. Unless you were around to witness World War II, none of us have seen anything like this before. Do not expect the ride to be smooth or predictable, but feel free to sit back and enjoy: This is history in the making.

By some estimates, at least three-quarters of the world’s compute is in the United States.