Holden Karnofsky emails me on transformative AI

Here is Holden. Our discussion started with this post of mine; for his words I will use quotation marks rather than dealing with double indentation:

“…debates about specifics between climate scientists get incredibly intricate (and are often very sensitive to parameters we just can’t reasonably estimate), and if you tried to get oriented to climate science by reading one it would be a nightmare, but this doesn’t mean the big-picture ways in which climatologists diverge from conventional wisdom should be discounted.

I think the broad-brush picture here is a better starting point than an exchange between Eliezer, Ajeya, me and Scott.

Even shorter version:

  • You can run the bio anchors analysis in a lot of different ways, but they all point to transformative AI this century;
  • As do the expert surveys, as does Metaculus;
  • Eliezer’s argument is that it will be sooner;
  • The most naive extrapolations of economic growth trends imply singularity (or at least “new growth mode”) this century;
  • Other angles of analysis (including the very-outside-view semi-informative priors) are basically about rebutting the idea that there’s a giant burden of proof here;
  • Specific arguments for “later than 2100,” including outside-view arguments, seem reasonably close to nonexistent; Robin Hanson has an (unconvincing IMO) case for synthetic AI taking longer, but Robin is also forecasting transformative AI of a sort (ems, which he says will lead to an explosion in economic growth and a relatively quick transition to something even stranger) this century.

So I ultimately don’t see how you get under P=1/3 or so for this century, and if you are way under P=1/3, I’d be interested in anything more you could say about why (though I recognize forecasts can’t always totally be explained).

P=1/3 would put “transformative AI this century” within 2x of “nuclear war this century,” and I think the average “nuclear war” is way less likely (like at least 10x) to have super-long-run impacts than the average “transformative AI is developed.”

That’s my basic thinking! It’s based on numerous angles and is not very sensitive to specific takes on the rate at which FLOPs get cheaper, although at some point I hope we can nail that parameter down better via prediction markets or something of the sort. Prediction markets on transformative AI itself are going to be harder, but I’m hopeful about that too. I think a very fast transition is plausible, so it could be very bad news if folks like you continue thinking it’s a remote possibility until it’s obviously upon us. (In my analogy, today might be like early January was for COVID. We don’t know enough to be sure, but we know enough to be highly alert, and we won’t necessarily be sure very long before it’s too late.)”

End of Holden; now back to TC.  And here is Holden’s “most important century” page.  That is our century, people!  This is all a bit of a follow-up on an in-person dialogue we had, but I will give him the last word (for now).  For the quantitatively inclined, three stylized sketches of the arithmetic behind his claims follow.
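First, the bio anchors bullet. The core mechanism of that analysis is a crossover: pick an anchor for how much training compute transformative AI would require, project a falling cost per FLOP and a rising willingness to spend, and find the year the budget crosses the cost. Here is a minimal sketch with toy parameters of my own choosing; it is not Ajeya Cotra’s actual model or her numbers, just an illustration of the mechanism:

```python
# Stylized "bio anchors" crossover calculation (toy illustration only;
# every parameter below is an assumption, not Cotra's estimate).

required_flops = 1e30        # assumed training-compute anchor
cost_per_flop_2020 = 1e-17   # assumed dollars per FLOP in 2020
halving_years = 2.5          # assumed $/FLOP halving time
budget_2020 = 1e9            # assumed max training budget in 2020
budget_growth = 1.10         # assumed 10%/year growth in willingness to spend

for year in range(2020, 2101):
    t = year - 2020
    cost = required_flops * cost_per_flop_2020 * 0.5 ** (t / halving_years)
    budget = budget_2020 * budget_growth ** t
    if budget >= cost:
        print(f"Crossover year: {year}")  # -> 2045 with these toy numbers
        break
```

With these numbers, each extra order of magnitude of required compute delays the crossover by only about six years, which is the flavor of Holden’s “run it a lot of different ways, they all point to this century” point.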
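Next, the growth-extrapolation bullet. The standard observation (this is the flavor of David Roodman’s modeling for Open Philanthropy, though the parameters here are mine) is that if the growth rate itself rises with the level of output — dy/dt = a·y^(1+ε) with ε > 0 — then output reaches infinity in finite time, at t* = 1/(a·ε·y₀^ε). Hence “singularity (or at least ‘new growth mode’)”:

```python
# Finite-time blow-up from superexponential growth (illustrative sketch).
# If dy/dt = a * y**(1 + eps) with eps > 0, the growth rate rises with y,
# and integrating gives divergence at t* = 1 / (a * eps * y0**eps).

def blowup_time(y0: float, a: float, eps: float) -> float:
    """Years until y(t) diverges under dy/dt = a * y**(1 + eps)."""
    return 1.0 / (a * eps * y0**eps)

# Illustrative parameters (assumptions, not fitted to real GDP data):
y0, a, eps = 1.0, 0.03, 0.5   # 3% baseline growth, mildly superexponential
print(f"Blow-up in ~{blowup_time(y0, a, eps):.0f} years")  # -> ~67 years
```

Pure exponential growth (ε = 0) never blows up; the singularity claim rests on the historical record looking superexponential.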
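Finally, the nuclear-war comparison. Using only the figures in Holden’s email — P(transformative AI this century) = 1/3, nuclear war “within 2x” of that, and nuclear war “at least 10x” less likely to have super-long-run impacts — the expected long-run stakes favor transformative AI by a factor of five even on the most generous reading for nuclear war:

```python
# Back-of-envelope version of Holden's comparison (the 1/3, 2x, and 10x
# figures are from his email; the framing as expected impact is mine).

p_tai = 1 / 3            # Holden's floor for transformative AI this century
p_nuclear = 2 * p_tai    # "within 2x": grant nuclear war the upper bound, 2/3
impact_ratio = 10        # TAI >= 10x as likely to have super-long-run impact

ratio = (p_tai * impact_ratio) / (p_nuclear * 1)
print(f"TAI vs. nuclear war, expected long-run impact: {ratio:.1f}x")  # -> 5.0x
```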
