My new podcast with Dwarkesh Patel

We discussed how the insights of Hayek, Keynes, Smith, and other great economists help us make sense of AI, growth, risk, human nature, anarchy, central planning, and much more.

Dwarkesh is one of the very best interviewers around; here are the links.  If Twitter is blocked for you, here is the transcript, and here is Spotify, among others.  Here is the most salacious part of the exchange, highly atypical of course:

Dwarkesh Patel 00:17:16

If Keynes were alive today, what are the odds that he’s in a polycule in Berkeley, writing the best written LessWrong post you’ve ever seen?

Tyler Cowen 00:17:24

I’m not sure what the counterfactual means. Keynes is so British. Maybe he’s an effective altruist at Cambridge. Given how he seemed to have run his sex life, I don’t think he needed a polycule. A polycule is almost a Williamsonian device to economize on transaction costs. But Keynes, according to his own notes, seems to have done things on a very casual basis.

And on another topic:

Dwarkesh Patel 00:36:44

We’re talking, I guess, about like GPT five level models. When you think in your mind about like, okay, this is GPT five. What happens with GPT six, GPT seven. Do you see it? Do you still think in the frame of having a bunch of RAs, or does it seem like a different sort of thing at some point?

Tyler Cowen 00:36:59

I’m not sure what those numbers going up mean, what a GPT seven would look like, or how much smarter it could get. I think people make too many assumptions there. It could be the real advantages are integrating it into workflows by things that are not better GPTs at all. And once you get to GPT, say, 5.5, I’m not sure you can just turn up the dial on smarts and have it, like, integrate general relativity and quantum mechanics.

Dwarkesh Patel 00:37:26

Why not?

Tyler Cowen 00:37:27

I don’t think that’s how intelligence works. And this is a Hayekian point. And some of these problems, there just may be no answer. Like, maybe the universe isn’t that legible, and if it’s not that legible, the GPT eleven doesn’t really make sense as a creature or whatever.

Dwarkesh Patel 00:37:44

Isn’t there a Hayekian argument to be made that, listen, you can have billions of copies of these things? Imagine the sort of decentralized order that could result, the amount of decentralized tacit knowledge that billions of copies talking to each other could have. That, in and of itself, is an argument that the whole thing as an emergent order will be much more powerful than we were anticipating.

Tyler Cowen 00:38:04

Well, I think it will be highly productive. What “tacit knowledge” means with AIs, I don’t think we understand yet. Is it by definition all non-tacit? Or does the fact that how GPT-4 works is not legible to us or even its creators so much? Does that mean it’s possessing of tacit knowledge, or is it not knowledge? None of those categories are well thought out, in my opinion. So we need to restructure our whole discourse about tacit knowledge in some new, different way. But I agree, these networks of AIs, even before, like, GPT-11, they’re going to be super productive, but they’re still going to face bottlenecks, right? And I don’t know how good they’ll be at, say, overcoming the behavioral bottlenecks of actual human beings, the bottlenecks of the law and regulation. And we’re going to have more regulation as we have more AIs.

You will note I corrected the AI transcriber on some minor matters.  In any case, self-recommending, and here is the YouTube embed:
