Category: Web/Tech

My contentious Conversation with Jonathan Haidt

Here is the transcript, audio, and video.  Here is the episode summary:

But might technological advances and good old human resilience allow kids to adapt more easily than he thinks?

Jonathan joined Tyler to discuss this question and more, including whether left-wingers or right-wingers make for better parents, the wisest person Jonathan has interacted with, psychological traits as a source of identitarianism, whether AI will solve the screen time problem, why school closures didn’t seem to affect the well-being of young people, whether the mood shift since 2012 is not just about social media use, the benefits of the broader internet vs. social media, the four norms to solve the biggest collective action problems with smartphone use, the feasibility of age-gating social media, and more.

It is a very different tone than most CWTs, most of all when we get to social media.  Here is one excerpt:

COWEN: There are two pieces of evidence — when I look at them, they don’t seem to support your story out of sample.

HAIDT: Okay, great. Let’s have it.

COWEN: First, across countries, it’s mostly the Anglosphere and the Nordic countries, which are more or less part of the Anglosphere. Most of the world is immune to this, and smartphones for them seem fine. Why isn’t it just that a negative mood came upon the Anglosphere for reasons we mostly don’t understand, and it didn’t come upon most of the rest of the world? If we’re differentiating my hypothesis from yours, doesn’t that favor my view?

HAIDT: Well, once you look into the connections and the timing, I would say no. I think I see what you’re saying now, but I think your view would say, “Just for some reason we don’t know, things changed around 2012.” Whereas I’m going to say, “Okay, things changed around 2012 in all these countries. We see it in the mental illness rates, especially of the girls.” I’m going to say it’s not just some mood thing. It’s like (a), why is it especially the girls? (b) —

COWEN: They’re more mimetic, right?

HAIDT: Yes, that’s true.

COWEN: Girls are more mimetic in general.

HAIDT: That’s right. That’s part of it. You’re right, that’s part of it. They’re just much more open to connection. They’re more influenced. They’re more subject to contagion. That is a big part of it, you’re right. What Zach Rausch and I have found — he’s my lead researcher at the After Babel Substack. I hope people will sign up. It’s free. We’ve been putting out tons of research. Zach has really tracked down what happened internationally, and I can lay it out.

Now I know the answer. I didn’t know it two months ago. The answer is, within countries, as I said, it’s the people who are conservative and religious who are protected, and the others, the kids get washed out to sea. Psychologically, they feel their life has no meaning. They get more depressed. Zach has looked across countries, and what you find in Europe is that, overall, the kids are getting a little worse off psychologically.

But that hides the fact that in Eastern Europe, which is getting more religious, the kids are actually healthier now than they were 10 years ago, 15 years ago. Whereas in Catholic Europe, they’re a little worse, and in Protestant Europe, they’re much worse.

It doesn’t seem to me like, oh, New Zealand and Iceland were talking to each other, and the kids were sharing memes. It’s rather, everyone in the developed world, even in Eastern Europe, everyone — their kids are on phones, but the penetration, the intensity, was faster in the richest countries, the Anglos and the Scandinavians. That’s where people had the most independence and individualism, which was pretty conducive to happiness before the smartphone. But it now meant that these are the kids who get washed away when you get that rapid conversion to the phone-based childhood around 2012. What’s wrong with that explanation?

COWEN: Old Americans also seem grumpier to me. Maybe that’s cable TV, but it’s not that they’re on their phones all the time. And you know all these studies. If you try to assess what percentage of the variation in happiness of young people is caused by smartphone usage — Sabine Hossenfelder had a recent video on this — those numbers are very, very, very small. That’s another measurement that seems to discriminate in favor of my theory, exogenous mood shifts, rather than your theory. Why not?

Very interesting throughout, recommended.  And do not forget that Jon’s argument is outlined in detail in his new book, titled The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness.

Will strong AI raise or lower interest rates?

That is the topic of my latest Bloomberg column.  Here is one excerpt:

First, as a matter of practice, if there is a true AI boom, or the advent of artificial general intelligence (AGI), the demand for capital expenditures (capex) will be extremely high. Second, as a matter of theory, the productivity of capital is a major factor in shaping real interest rates. If capital productivity rises significantly due to AI, real interest rates ought to rise as well.

Think about capex in a world of AI. The scurry to produce more high-quality semiconductor chips will continue. Those investments are not easy or cheap. But the demand for investment will not stop there. The more that AI is integrated into lives and business plans, the higher will be the demand for computation. That will induce a significant expansion of energy infrastructure.

Again, those are not cheap investments. Northern Virginia, for example, is now facing a major dilemma along these lines, and not only because of AI. The region is home to major data centers, and now needs the equivalent of several large nuclear power plants to meet projected energy demands.

And that could be just the beginning of the rise in capex. AI is already driving some advances in the pace of scientific discovery, a trend that can be expected to continue. Imagine, for instance, if AI made water desalination cost-effective in many parts of the world. All of a sudden there would be more demand to develop more parts of California, Arizona and Nevada. The US would build more real estate, using more energy in the process. Saudi Arabia, the UAE and many other places might do the same, boosting overall demand for investment yet higher.

Demand for space travel and satellite launches seems to be rising as well, partly because of AI. Software innovation is driving a lot of progress on the hardware side. Less optimistically, AI-driven warfare and drone combat may rise in importance, as already is true in Ukraine and the Middle East. This is bad news that will nevertheless drive further investment.

Note that in the longer run:

Still, it makes sense to be prepared for a reversal of the long-run trend of falling real interest rates — at least for several decades, until AI-driven progress creates more wealth to replenish stocks of savings, lowering real rates once again.
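To make that theoretical point concrete: in a standard neoclassical setup, the real rate tracks the marginal product of capital, so an AI-driven jump in productivity pushes rates up until the capital stock and savings catch up. Here is a minimal numerical sketch of that mechanism; the Cobb-Douglas form and every number in it are illustrative assumptions, not figures from the column.

```python
# Minimal sketch: the real interest rate tracks the marginal product of capital (MPK),
# here with Cobb-Douglas production Y = A * K^alpha * L^(1 - alpha) and r = MPK - depreciation.
# Every number below is an illustrative assumption, not a calibrated estimate.

def real_rate(A, K, L, alpha=0.33, depreciation=0.05):
    """Real rate implied by r = MPK - depreciation, with MPK = alpha * Y / K."""
    Y = A * K**alpha * L**(1 - alpha)
    return alpha * Y / K - depreciation

K, L = 600.0, 100.0                     # arbitrary capital stock and labor force
baseline = real_rate(A=1.0, K=K, L=L)
ai_boom = real_rate(A=1.3, K=K, L=L)    # suppose AI raises total factor productivity by 30%

print(f"baseline real rate: {baseline:.3f}")                    # about 0.05
print(f"real rate after the productivity jump: {ai_boom:.3f}")  # about 0.08
# Higher capital productivity raises the MPK and hence real rates, until enough new
# capital (or replenished savings) accumulates to push the return back down.
```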

The most interesting general question is, if strong AI really is taking off, what is the best way of earning money from that reality?  Please apply the theory of tax incidence to any and all possible answers.

Strong AI and the O-Ring model

Let’s say the Sumerians were gifted strong AI, simply as an exogenous shock to a historical model.  Could they put it to much use?  Electricity would be one immediate problem, but not the only problem.

Or give strong AI to a caveman.

Thomas Edison had electricity, but how much could he do with strong AI?  Lord Asquith?  Adlai Stevenson?

Where exactly are we in this historical sequence?

My review of Suno, AI-generated music

Try it here: click on the mention of making full, two-minute songs on the right, and use the Explore tab.  To me it is remarkable that the resulting AI-generated music is as good as it is.  But it still isn’t anything I would listen to, other than out of curiosity.  It is best at EDM, standardized genres such as routine heavy metal, and certain ethnic musics, especially if “the affect” can be created by methods of layering.  Its weakness is an inability to generate the simple, memorable melody, à la Sir Paul or the other Paul, namely Paul Simon.  For my taste there is “not enough music in the music.”  Suno cannot yet create the ineffable something, which is what I listen to music for.

That said, it is not worse than what most people listen to.  It remains to be seen at what pace progress will be made, or whether current approaches, extrapolated to allow for further improvement, can get us to real music, rather than stuff that sounds like music.

Is an Economic Growth Explosion Imminent?

On the road, I haven’t had a chance to read this paper yet, but I pass it along as a matter of interest:

Theory predicts that global economic growth will stagnate and even come to an end due to slower and eventually negative growth in population. It has been claimed, however, that Artificial Intelligence (AI) may counter this and even cause an economic growth explosion. In this paper, we critically analyse this claim. We clarify how AI affects the ideas production function (IPF) and propose three models relating innovation, AI and population: AI as a research-augmenting technology; AI as a researcher-scale-enhancing technology; and AI as a facilitator of innovation. We show, performing model simulations calibrated on USA data, that AI on its own may not be sufficient to accelerate the growth rate of ideas production indefinitely. Overall, our simulations suggest that an economic growth explosion would only be possible under very specific and perhaps unlikely combinations of parameter values. Hence we conclude that it is not imminent.

That is from Derick Almeida, Wim Naudé, and Tiago Sequeira.
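For those who want to see the mechanics behind the claim, here is a minimal sketch of a semi-endogenous ideas production function with AI modeled as a research-augmenting multiplier; the functional form and every parameter are illustrative assumptions rather than the authors’ calibration.

```python
# Minimal sketch of a semi-endogenous ideas production function (IPF):
#     A_dot = delta * S**lam * A**phi, with phi < 1,
# where S is effective research input and AI enters as a multiplier on researchers.
# The functional form and every parameter here are illustrative assumptions.

def ideas_growth_path(ai_multiplier, years=200, researchers0=100.0,
                      pop_growth=-0.005, delta=0.002, lam=0.75, phi=0.5):
    A, R, path = 1.0, researchers0, []
    for _ in range(years):
        S = ai_multiplier * R          # AI as a research-augmenting technology
        A_dot = delta * S**lam * A**phi
        path.append(A_dot / A)         # growth rate of the ideas stock
        A += A_dot
        R *= (1 + pop_growth)          # the research population shrinks over time
    return path

no_ai = ideas_growth_path(ai_multiplier=1.0)
with_ai = ideas_growth_path(ai_multiplier=5.0)  # assume AI makes each researcher 5x as productive

print("ideas growth in the final year, no AI:  ", round(no_ai[-1], 5))
print("ideas growth in the final year, with AI:", round(with_ai[-1], 5))
# With phi < 1 and a shrinking research population, a one-off AI multiplier raises the
# level of the ideas stock, but the growth rate still decays over time, in the spirit of
# the paper's claim that AI on its own may not sustain a growth explosion.
```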

Lawyering in the Age of Artificial Intelligence

We conducted the first randomized controlled trial to study the effect of AI assistance on human legal analysis. We randomly assigned law school students to complete realistic legal tasks either with or without the assistance of GPT-4. We tracked how long the students took on each task and blind-graded the results. We found that access to GPT-4 only slightly and inconsistently improved the quality of participants’ legal analysis but induced large and consistent increases in speed. AI assistance improved the quality of output unevenly—where it was useful at all, the lowest-skilled participants saw the largest improvements. On the other hand, AI assistance saved participants roughly the same amount of time regardless of their baseline speed. In follow-up surveys, participants reported increased satisfaction from using AI to complete legal tasks and correctly predicted the tasks for which GPT-4 was most helpful. These results have important descriptive and normative implications for the future of lawyering. Descriptively, they suggest that AI assistance can significantly improve productivity and satisfaction, and that it can be selectively employed by lawyers in areas where it is most useful. Because these tools have an equalizing effect on performance, they may also promote equality in a famously unequal profession. Normatively, our findings suggest that law schools, lawyers, judges, and clients should affirmatively embrace AI tools and plan for a future in which they will become widespread.

That is by Jonathan H. Choi, Amy Monahan, and Daniel Schwarcz, forthcoming in the Minnesota Law Review.  Via the excellent Kevin Lewis.

LDS principles for AI

Knowing that the proper use of AI will help the Church accomplish God’s work of salvation and exaltation, the Church has issued the following guiding principles for using AI. These were introduced to employees of The Church of Jesus Christ of Latter-day Saints worldwide on Wednesday, March 13, 2024, by Elder Gerrit W. Gong of the Quorum of the Twelve Apostles (co-chair of the Church Communication Committee) and Elder John C. Pingree of the Seventy (executive director of the Correlation Department).

Here is the full link, better than most of what is done in this area.  For instance:

  • The Church will use artificial intelligence to support and not supplant connection between God and His children.
  • The Church will use artificial intelligence in positive, helpful, and uplifting ways that maintain the honesty, integrity, ethics, values, and standards of the Church…
  • The Church’s use of artificial intelligence will safeguard sacred and personal information.

Worth a ponder.  Via Tyler Ransom.

Scenarios for the Transition to AGI

By Anton Korinek and Donghyun Suh, in a new NBER working paper:

We analyze how output and wages behave under different scenarios for technological progress that may culminate in Artificial General Intelligence (AGI), defined as the ability of AI systems to perform all tasks that humans can perform. We assume that human work can be decomposed into atomistic tasks that differ in their complexity. Advances in technology make ever more complex tasks amenable to automation. The effects on wages depend on a race between automation and capital accumulation. If automation proceeds sufficiently slowly, then there is always enough work for humans, and wages may rise forever. By contrast, if the complexity of tasks that humans can perform is bounded and full automation is reached, then wages collapse. But declines may occur even before if large-scale automation outpaces capital accumulation and makes labor too abundant. Automating productivity growth may lead to broad-based gains in the returns to all factors. By contrast, bottlenecks to growth from irreproducible scarce factors may exacerbate the decline in wages.
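For intuition on the “race between automation and capital accumulation” described in the abstract, here is a minimal task-based simulation sketch; the Zeira-style functional form and all parameter values are illustrative assumptions, not Korinek and Suh’s specification.

```python
# Minimal sketch of the "race" between automation and capital accumulation.
# Tasks lie on [0, 1]; a fraction beta is automated (done by capital), the rest by labor.
# Zeira-style aggregation: Y = (K/beta)**beta * (L/(1-beta))**(1-beta), and the wage is
# labor's marginal product, (1 - beta) * Y / L. All forms and numbers are illustrative assumptions.

def wage_path(automation_step, years=50, K=1.0, L=1.0, beta=0.3,
              savings_rate=0.25, depreciation=0.05):
    wages = []
    for _ in range(years):
        if beta < 1.0:
            Y = (K / beta) ** beta * (L / (1 - beta)) ** (1 - beta)
            wages.append((1 - beta) * Y / L)
        else:                  # full automation: every task is done by capital
            Y = K
            wages.append(0.0)  # labor's marginal product falls to zero
        K += savings_rate * Y - depreciation * K     # capital accumulation
        beta = min(1.0, beta + automation_step)      # ever more complex tasks get automated
    return wages

slow = wage_path(automation_step=0.002)  # automation slower than capital deepening
fast = wage_path(automation_step=0.02)   # automation races ahead and full automation is reached

print("slow automation, wage in year 1 vs. year 50:", round(slow[0], 2), "to", round(slow[-1], 2))
print("fast automation, wage in year 1 vs. year 50:", round(fast[0], 2), "to", round(fast[-1], 2))
# When automation proceeds slowly, capital deepening keeps pulling wages up; when tasks are
# fully automated, labor becomes redundant and the wage collapses, the contrast drawn in the abstract.
```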

The best paper on these topics so far?  And here is a recent Noah Smith piece on employment as AI proceeds.  And a recent Belle Lin WSJ piece, via Frank Gullo, “Tech Job Seekers Without AI Skills Face a New Reality: Lower Salaries and Fewer Roles.”  And here is a proposal for free journalism school for everybody (NYT, okie-dokie!).

Gatekeeping is Apple’s Brand Promise

Steve Sinofsky, former president of Microsoft’s Windows division and now a VC, has an excellent deep dive on the EU’s Digital Markets Act (DMA). The Act is very squarely aimed at Apple, despite the fact that Apple is not a monopoly and has a significantly smaller share of the phone market than Android. Apple’s history is well known: in contrast with Microsoft, it went for a closed system in which Apple controlled entry to a much greater extent. The same was true with iPhone versus Android.

iPhone was successful, but it was not as successful as Android, which came shortly after, because of the constraints Steve put in place to be the best, not the highest share or the greatest number of units. Android was to smartphones just as Microsoft was to personal computers. Android sought out the highest share, greatest variety of hardware at the lowest prices, and most open platform for both phone makers and developers. By making Android open source, Google even out-Microsofted Microsoft by providing what hardware makers had always wanted—complete control. A lot more manufacturers, people, and companies appreciated that approach more than Apple’s. That’s why something like 7 out of 10 smartphones in the world run Android.

Android has the kind of success Microsoft would envy, but not Apple, primarily because with that success came most all the same issues that Microsoft sees (still) with the Windows PC. The security, privacy, abuse, fragility, and other problems of the PC show up on Android at a rate like the PC compared to Macintosh and iPhone. Only this time it is not the lack of motivation bad actors have to exploit iPhone, rather it is the foresight of the Steve Jobs vision for computing. He pushed to have a new kind of computer that further encapsulated and abstracted the computer to make it safer, more reliable, more private, and secure, great battery life, more accessible, more consistent, always easier to use, and so on. These attributes did not happen by accident. They were the process of design and architecture from the very start. These attributes are the brand promise of iPhone as much as the brand promise of Android is openness, ubiquity, low price, choice.

The lesson of the first two decades of the PC and the first almost two decades of smartphones is that these ends of a spectrum are not accidental. These choices are not mutually compatible. You don’t get both. I know this is horrible to say and everyone believes that there is somehow malicious intent to lock people into a closed environment or an unintentional incompetence that permits bad software to invade an ecosystem. Neither of those would be the case. Quite simply, there’s a choice between engineering and architecting for one or the other and once you start you can’t go back. More importantly, the market values and demands both.

That is unless you’re a regulator in Brussels. Then you sit in an amazing government building and decide that it is entirely possible to just by fiat declare that the iPhone should have all the attributes of openness.

Apple’s promise to iPhone users is that it will be a gatekeeper. Gatekeeping is what allows Apple to promise greater security, privacy, usability and reliability. Gatekeeping is Apple’s brand promise. Gatekeeping is what consumers are buying. The EU’s DMA is an attempt to make Apple more “open” but it can only do so at the expense of turning Apple into Android, devaluing the brand promise and ironically reducing competition.

Read the whole thing for more details and history, including useful comparisons with the US antitrust trial against Microsoft.

Austin Vernon on drones and defense (from my email)

I think they still favor the defensive. On the front line they make movement, hence offense, very difficult.

In the strategic sense we’ve already seen Ukraine adjust to the propeller drone/cruise missile attacks. The first few months were terrible for them, but then they organized a defense system with the mobile anti-drone teams. The interception percentage for drones traveling a fair distance over Ukraine is extremely high, 98% type numbers. Most of the Russian focus is now on more “front line” targets like Odessa, because the Ukrainians don’t have as much time and space to make the interception. They are downing maybe 60%-70% of those drones.

The Russians are slow to adapt, but they eventually do. There is no reason to believe they won’t get better at intercepting these slow drones. Expensive cruise missiles with high success rates can end up being a better deal when strategic drones have 98% loss rates. The slow drones are better suited for near-front-line attacks. It also wouldn’t surprise me if the drones adapted to become more expensive, adding features like quiet engines, thermal-signature obfuscation, and lower radar cross sections.

I also think it’s worth pointing out that the Houthis have tried unmanned surface vehicles and they’ve all been quickly destroyed. Same with their slower drones. The hardest weapons to defend against have been conventional anti-ship missiles and the newer ballistic anti-ship missiles. You can argue about the intercepting missiles being too expensive, but the US is moving towards using more APKWS guided rockets against these strategic drone targets. These only cost $30,000 each and we already procure tens of thousands of them each year. The adaptation game is ongoing, but the short-range FPV drones seem quite durable while the strategic slow-speed drone impact looks less sustainable.
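To make the cost-exchange arithmetic concrete, here is a minimal back-of-the-envelope sketch. The 98 percent and roughly 65 percent interception rates and the $30,000 APKWS figure come from the email above; the drone and cruise-missile unit costs, and the assumed 30 percent interception rate against cruise missiles, are purely illustrative assumptions.

```python
# Back-of-the-envelope cost exchange for strategic strikes, using interception rates in the
# ranges quoted above. Unit costs for the drone and the cruise missile, and the missile
# interception rate, are illustrative assumptions; the $30,000 APKWS figure is from the email.

def attacker_cost_per_hit(unit_cost, interception_rate):
    """Expected attacker spend for one weapon to get through the defenses."""
    return unit_cost / (1 - interception_rate)

slow_drone_cost = 50_000         # assumed unit cost of a slow propeller drone
cruise_missile_cost = 1_500_000  # assumed unit cost of a cruise missile
apkws_cost = 30_000              # guided-rocket interceptor cost cited above

print(f"drone at 98% interception:   ${attacker_cost_per_hit(slow_drone_cost, 0.98):,.0f} per hit")
print(f"drone at 65% interception:   ${attacker_cost_per_hit(slow_drone_cost, 0.65):,.0f} per hit")
print(f"missile at 30% interception: ${attacker_cost_per_hit(cruise_missile_cost, 0.30):,.0f} per hit")
print(f"defender cost per APKWS interceptor: ${apkws_cost:,}")
# At 98% interception, even a cheap $50,000 drone costs the attacker $2.5 million per
# successful strike, worse than the assumed cruise missile (about $2.1 million per hit),
# while the defender pays roughly $30,000 per APKWS shot.
```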

Here is my original post.

Marc Andreessen and I talk AI at an a16z American Dynamism event

a16z has released the talks from that event, and we are releasing ours too, as a bonus episode of CWT.  But note it is shorter than usual, and not in the typical CWT format — this one was done for an audience of actual DC human beings!

Excerpt:

COWEN: Why is open-source AI in particular important for national security?

ANDREESSEN: For a whole bunch of reasons. One is, it is really hard to do security without open source. There are actually two schools of thought on information security, computer security broadly, that have played out over the last 50 years. There was one school of security that says you want to basically hide the source code, and you want to hide the source code precisely so that bad guys can’t find the flaws in it, right? This seems intuitive; presumably, that would be the safe way to do things.

Then over the course of the last 30 or 40 years, basically, what’s evolved is the realization in the field (and I think very broadly) that actually, that’s a mistake. In the software field, we call that “security through obscurity,” right? We hide the code. People can’t exploit it. The problem, of course, is: okay, but that means the flaws are still in there, right?

If anybody actually gets to the code, they just basically have a complete index of all the problems. There’s a whole bunch of ways for people to get the code. They hack in. It’s actually very easy to steal software code from a company. You hire the janitorial staff to stick a USB stick into a machine at 3:00 in the morning. Software companies are very easily penetrated. It turned out, security through obscurity was a very bad way to do it. The much more secure way to do it is actually open source.

Basically, put the code in public and then basically build the code in such a way that when it runs, it doesn’t matter whether somebody has access to the code. It’s still fully secure, and then you just have a lot more eyes on the code to discover the problems. In general, open source has turned out to be much more secure. I would start there. If we want secure systems, I think this is what we have to do.

Marc is always in top form.

Claims about compute

Hard for me to judge this one, but do not underestimate elasticity of supply.  And by the way, initial reports on Devin are very positive.

TikTok divestiture

I’ve blogged this in the past, and don’t have much to add to my previous views.  I will say this, however: if TikTok truly is breaking laws on a major scale, let us start a legal case with fact-finding and an adversarial process.  Surely such a path would uncover the wrongdoing under consideration, or at least strongly hint at it.  Alternatively, how about some research, such as, say, RCTs, showing the extreme reach and harmful influence of TikTok?  Is that asking for too much?

Now maybe all that has been done and I am just not aware of it.  Alternatively, perhaps this is another of those bipartisan rushes to judgment that we are likely to regret in the longer run.  In which case this would be filed under “too important to be left to legal fact-finding and science,” a class of issues which is sadly already too large.