Existential risk, AI, and the inevitable turn in human history

In several of my books and many of my talks, I take great care to spell out just how special recent times have been, for most Americans at least.  For my entire life, and a bit more, there have been two essential features of the basic landscape:

1. American hegemony over much of the world, and relative physical safety for Americans.

2. An absence of truly radical technological change.

Unless you are very old, old enough to have taken in some of WWII, or to have been drafted into Korea or Vietnam, those features probably describe your entire life as well.

In other words, virtually all of us have been living in a bubble “outside of history.”

Now, circa 2023, at least one of those assumptions is going to unravel, namely #2.  AI represents a truly major, transformational technological advance.  Biomedicine might too, but for this post I’ll stick to the AI topic, as I wish to consider existential risk.

#1 might unravel soon as well, depending on how Ukraine and Taiwan fare.  It is fair to say we don’t know, but nonetheless #1 also is under increasing strain.

Hardly anyone you know, including yourself, is prepared to live in actual “moving” history.  It will panic many of us, disorient the rest of us, and cause great upheavals in our fortunes, both good and bad.  In my view the good will considerably outweigh the bad (at least from losing #2, not #1), but I do understand that the absolute quantity of the bad disruptions will be high.

I am reminded of the advent of the printing press, after Gutenberg.  Of course the press brought an immense amount of good, enabling the scientific and industrial revolutions, among many other benefits.  But it also created writings by Lenin, Hitler, and Mao’s Red Book.  It is a moot point whether you can “blame” those on the printing press, nonetheless the press brought (in combination with some other innovations) a remarkable amount of true, moving history.  How about the Wars of Religion and the bloody 17th century to boot?  Still, if you were redoing world history you would take the printing press in a heartbeat.  Who needs poverty, squalor, and recurrences of Genghis Khan-like figures?

But since we are not used to living in moving history, and indeed most of us are psychologically unable to truly imagine living in moving history, all these new AI developments pose a great conundrum.  We don’t know how to respond psychologically, or for that matter substantively.  And just about all of the responses I am seeing I interpret as “copes,” whether from the optimists, the pessimists, or the extreme pessimists (e.g., Eliezer).  No matter how positive or negative the overall calculus of cost and benefit, AI is very likely to overturn most of our apple carts, most of all for the so-called chattering classes.

The reality is that no one at the beginning of the printing press had any real idea of the changes it would bring.  No one at the beginning of the fossil fuel era had much of an idea of the changes it would bring.  No one is good at predicting the longer-term or even medium-term outcomes of these radical technological changes (we can do the short term, albeit imperfectly).  No one.  Not you, not Eliezer, not Sam Altman, and not your next door neighbor.

How well did people predict the final impacts of the printing press?  How well did people predict the final impacts of fire?  We even have an expression “playing with fire.”  Yet it is, on net, a good thing we proceeded with the deployment of fire (“Fire? You can’t do that! Everything will burn! You can kill people with fire! All of them! What if someone yells “fire” in a crowded theater!?”).

So when people predict a high degree of existential risk from AGI, I don’t actually think “arguing back” on their chosen terms is the correct response.  Radical agnosticism is the correct response, where all specific scenarios are pretty unlikely.  Nonetheless I am still for people doing constructive work on the problem of alignment, just as we do with all other technologies, to improve them.  I have even funded some of this work through Emergent Ventures.

I am a bit distressed each time I read an account of a person “arguing himself” or “arguing herself” into existential risk from AI being a major concern.  No one can foresee those futures!  Once you keep up the arguing, you also are talking yourself into an illusion of predictability.  Since it is easier to destroy than create, once you start considering the future in a tabula rasa way, the longer you talk about it, the more pessimistic you will become.  It will be harder and harder to see how everything hangs together, whereas the argument that destruction is imminent is easy by comparison.  The case for destruction is so much more readily articulable — “boom!”  Yet at some point your inner Hayekian (Popperian?) has to take over and pull you away from those concerns.  (Especially when you hear a nine-part argument based upon eight new conceptual categories that were first discussed on LessWrong eleven years ago.)  Existential risk from AI is indeed a distant possibility, just like every other future you might be trying to imagine.  All the possibilities are distant, I cannot stress that enough.  The mere fact that AGI risk can be put on a par with those other also distant possibilities simply should not impress you very much.

Given this radical uncertainty, you still might ask whether we should halt or slow down AI advances.  “Would you step into a plane if you had radical uncertainty as to whether it could land safely?” I hear some of you saying.

I would put it this way.  Our previous stasis, as represented by my #1 and #2, is going to end anyway.  We are going to face that radical uncertainty anyway.  And probably pretty soon.  So there is no “ongoing stasis” option on the table.

I find this reframing helps me come to terms with current AI developments.  The question is no longer “go ahead?” but rather “given that we are going ahead with something (if only chaos) and leaving the stasis anyway, do we at least get something for our trouble?”  And believe me, if we do nothing, yes, we will re-enter living history and quite possibly get nothing in return for our trouble.

With AI, do we get positives?  Absolutely, there can be immense benefits from making intelligence more freely available.  It also can help us deal with other existential risks.  Importantly, AI offers the potential promise of extending American hegemony just a bit more, a factor of critical importance, as Americans are right now the AI leaders.  And should we wait, and get a “more Chinese” version of the alignment problem?  I just don’t see the case for that, and no I really don’t think any international cooperation options are on the table.  We can’t even resurrect WTO or make the UN work or stop the Ukraine war.

Besides, what kind of civilization is it that turns away from the challenge of dealing with more…intelligence?  That has not the self-confidence to confidently confront a big dose of more intelligence?  Dare I wonder if such societies might not perish under their current watch, with or without AI?  Do you really want to press the button, giving us that kind of American civilization?

So we should take the plunge.  If someone is obsessively arguing about the details of AI technology today, and the arguments on LessWrong from eleven years ago, they won’t see this.  Don’t be suckered into taking their bait.  The longer a historical perspective you take, the more obvious this point will be.  We should take the plunge.  We already have taken the plunge.  We designed/tolerated our decentralized society so we could take the plunge.

See you all on the other side.
