Category: Web/Tech

A brief observation on AGI risk and employee selection (from my email)

  • Stunting the growth of artificial intelligence now just raises the probability of a bad future outcome, as prosocial and thoughtful people are more likely to be discouraged from entering the field if we attach a stigma to it. My view is that most people are good and care about others and our collective future. We need to maintain this ratio of “good people” in AI research. We can’t let this become the domain of malevolent actors. It’s too important for humanity.

That is from Ben R.

Existential risk, AI, and the inevitable turn in human history

In several of my books and many of my talks, I take great care to spell out just how special recent times have been, for most Americans at least.  For my entire life, and a bit more, there have been two essential features of the basic landscape:

1. American hegemony over much of the world, and relative physical safety for Americans.

2. An absence of truly radical technological change.

Unless you are very old, old enough to have taken in some of WWII or to have been drafted into Korea or Vietnam, those features probably describe your entire life as well.

In other words, virtually all of us have been living in a bubble “outside of history.”

Now, circa 2023, at least one of those assumptions is going to unravel, namely #2.  AI represents a truly major, transformational technological advance.  Biomedicine might too, but for this post I’ll stick to the AI topic, as I wish to consider existential risk.

#1 might unravel soon as well, depending on how Ukraine and Taiwan fare.  It is fair to say we don’t know; nonetheless #1 also is under increasing strain.

Hardly anyone you know, including yourself, is prepared to live in actual “moving” history.  It will panic many of us, disorient the rest of us, and cause great upheavals in our fortunes, both good and bad.  In my view the good will considerably outweigh the bad (at least from losing #2, not #1), but I do understand that the absolute quantity of the bad disruptions will be high.

I am reminded of the advent of the printing press, after Gutenberg.  Of course the press brought an immense amount of good, enabling the scientific and industrial revolutions, among many other benefits.  But it also gave us the writings of Lenin and Hitler, and Mao’s Little Red Book.  It is a moot point whether you can “blame” those on the printing press; nonetheless the press brought (in combination with some other innovations) a remarkable amount of true, moving history.  How about the Wars of Religion and the bloody 17th century to boot?  Still, if you were redoing world history you would take the printing press in a heartbeat.  Who needs poverty, squalor, and recurrences of Genghis Khan-like figures?

But since we are not used to living in moving history, and indeed most of us are psychologically unable to truly imagine living in moving history, all these new AI developments pose a great conundrum.  We don’t know how to respond psychologically, or for that matter substantively.  And just about all of the responses I am seeing I interpret as “copes,” whether from the optimists, the pessimists, or the extreme pessimists (e.g., Eliezer).  No matter how positive or negative the overall calculus of cost and benefit, AI is very likely to overturn most of our apple carts, most of all for the so-called chattering classes.

The reality is that no one at the beginning of the printing press had any real idea of the changes it would bring.  No one at the beginning of the fossil fuel era had much of an idea of the changes it would bring.  No one is good at predicting the longer-term or even medium-term outcomes of these radical technological changes (we can do the short term, albeit imperfectly).  No one.  Not you, not Eliezer, not Sam Altman, and not your next door neighbor.

How well did people predict the final impacts of the printing press?  How well did people predict the final impacts of fire?  We even have an expression “playing with fire.”  Yet it is, on net, a good thing we proceeded with the deployment of fire (“Fire? You can’t do that! Everything will burn! You can kill people with fire! All of them! What if someone yells ‘fire’ in a crowded theater!?”).

So when people predict a high degree of existential risk from AGI, I don’t actually think “arguing back” on their chosen terms is the correct response.  Radical agnosticism is the correct response, where all specific scenarios are pretty unlikely.  Nonetheless I am still for people doing constructive work on the problem of alignment, just as we do with all other technologies, to improve them.  I have even funded some of this work through Emergent Ventures.

I am a bit distressed each time I read an account of a person “arguing himself” or “arguing herself” into existential risk from AI being a major concern.  No one can foresee those futures!  If you keep up the arguing, you also are talking yourself into an illusion of predictability.  Since it is easier to destroy than create, once you start considering the future in a tabula rasa way, the longer you talk about it, the more pessimistic you will become.  It will be harder and harder to see how everything hangs together, whereas the argument that destruction is imminent is easy by comparison.  The case for destruction is so much more readily articulable — “boom!”  Yet at some point your inner Hayekian (Popperian?) has to take over and pull you away from those concerns.  (Especially when you hear a nine-part argument based upon eight new conceptual categories that were first discussed on LessWrong eleven years ago.)  Existential risk from AI is indeed a distant possibility, just like every other future you might be trying to imagine.  All the possibilities are distant, I cannot stress that enough.  The mere fact that AGI risk can be put on a par with those other also distant possibilities simply should not impress you very much.

Given this radical uncertainty, you still might ask whether we should halt or slow down AI advances.  “Would you step into a plane if you had radical uncertainty as to whether it could land safely?” I hear some of you saying.

I would put it this way.  Our previous stasis, as represented by my #1 and #2, is going to end anyway.  We are going to face that radical uncertainty anyway.  And probably pretty soon.  So there is no “ongoing stasis” option on the table.

I find this reframing helps me come to terms with current AI developments.  The question is no longer “go ahead?” but rather “given that we are going ahead with something (if only chaos) and leaving the stasis anyway, do we at least get something for our trouble?”  And believe me, if we do nothing, yes, we will still re-enter living history, and quite possibly get nothing in return for our trouble.

With AI, do we get positives?  Absolutely, there can be immense benefits from making intelligence more freely available.  It also can help us deal with other existential risks.  Importantly, AI offers the promise of extending American hegemony just a bit longer, a factor of critical importance, as Americans are right now the AI leaders.  And should we wait, and get a “more Chinese” version of the alignment problem?  I just don’t see the case for that, and no, I really don’t think any international cooperation options are on the table.  We can’t even resurrect the WTO, or make the UN work, or stop the Ukraine war.

Besides, what kind of civilization is it that turns away from the challenge of dealing with more…intelligence?  That has not the self-confidence to confidently confront a big dose of more intelligence?  Dare I wonder if such societies might not perish under their current watch, with or without AI?  Do you really want to press the button, giving us that kind of American civilization?

So we should take the plunge.  If someone is obsessively arguing about the details of AI technology today, and the arguments on LessWrong from eleven years ago, they won’t see this.  Don’t be suckered into taking their bait.  The longer a historical perspective you take, the more obvious this point will be.  We should take the plunge.  We already have taken the plunge.  We designed/tolerated our decentralized society so we could take the plunge.

See you all on the other side.

New Emergent Ventures winners, 25th cohort

Duncan McClements, 17, incoming at King’s College Cambridge, economics, general career and research support.

Jasmine Wang and team (Jasmine is a repeat winner), Trellis, AI and the book.

Sophia Brown, Berlin/Brooklyn, to study the State Department, and general career development.

Robert Tolan, western Ireland, farmer and math Olympiad winner, YIMBY by street for Ireland.

Conor Durkin, Chicago, to write a Chicago city Substack.

Guido Putignano, Milan/Zurich, to do a summer internship in computational biology for cell therapies, at Harvard/MIT.

Michelle K. Huang, to revitalize Japanese real estate and to enable a creative community in Japan, near Kyoto.

Rasheed Griffith, repeat winner, to found a Caribbean think tank.

The Fitzwilliam, a periodical of ideas, Ireland.  To expand and build it out, Fergus McCullough and Sam Enright, both repeat winners.

Lyn Stoler, Los Angeles, general career development and to develop material for a new pro-growth, pro-green agenda for states and localities.

Gwen Lester, Chicago, to develop a center for battered and sexually abused women, namely GLC Empowerment Center, also known as Nana’s House.

Sabrina Singh, Ontario, pre-college, to support her study of neurotechnology.

And Emergent Ventures Ukraine:

Isa Hasenko, eastern Ukraine, medical care for eastern Ukraine, delivered through a digital information system with real-time tracking, to trace every allocation.  He works with Fintable.io and MissionKharkiv.com.

Stephan Hosedlo, Lviv, to expand his company selling farm products and herbal products, and to buy a tractor.

Olesya Drashkaba, Kyiv, Sunseed Art, a company to market Ukrainian art posters around the world.

Peter Chernyshov, Edinburgh, mathematician, to run a math education project — Kontora Pi — teaching advanced math to talented kids and school teachers in Ukraine, to produce more math videos, and to recruit more teachers around Ukraine.

Andrew Solovei, western Ukraine, to build out a network to compensate small-scale Ukrainian volunteers in a scalable and verifiable manner.

Olena Skyrta, Kyiv, to start a for-profit that will tie new scientific innovations to Ukrainian and other businesses.

Yevheniia Vidishcheva, Kyiv, theatrical project to travel around Ukraine.

Alina Beskrovna, Mariupol and Harvard Kennedy School, general career support and to work on the economic reconstruction of Ukraine.

Are social media making us miserable?

Stuart Ritchie rebuts some of the recent studies:

And here’s the thing: when the authors of the “Facebook arrival” study raised their standards in this way, running a correction for multiple comparisons, all the results they found for well-being were no longer statistically significant. That is, a somewhat more conservative way of looking at the data indicated that every result they found was statistically indistinguishable from a scenario where Facebook had no effect on well-being whatsoever.

Now let’s turn to the second study, which was a randomised controlled trial where 1,637 adults were randomly assigned to shut down their Facebook account for four weeks, or go on using it as normal. Let’s call it the “deactivating Facebook” study. This “famous” study has been described as “the most impressive by far” in this area, and was the only study cited in the Financial Times as an example of the “growing body of research showing that reducing time on social media improves mental health”.

The bottom-line result was that leaving Facebook for a month led to higher well-being, as measured on a questionnaire at the end of the month. But again, looking in a bit more detail raises some important questions.

First, the deactivation happened in the weeks leading up to the 2018 US midterm elections. This was quite deliberate, because the researchers also wanted to look at how Facebook affected people’s political polarisation. But it does mean that the results they found might not apply to deactivating Facebook at other, less fractious times – maybe it’s particularly good to be away from Facebook during an election, when you can avoid hearing other people’s daft political opinions.

Second, just like the other Facebook study, the researchers tested a lot of hypotheses – and again, they used a correction to reduce false-positives. This time, the results weren’t wiped out entirely – but almost. Of the four questionnaire items that showed statistically-significant results before the correction, only one – “how lonely are you?” – remained significant after correction.

It’s debatable whether even this result would survive if the researchers corrected for all the other statistical tests they ran. Not only that, but they also ran a second model, controlling for the overall amount of time people used Facebook, and this found even fewer results than the first one.

Third, as well as the well-being questionnaire at the end of the study, the participants got daily text messages asking them how happy they were, among other questions. Oddly, these showed absolutely no effect of being off Facebook – and not even the slightest hint of a trend in that direction.

Here is the entire piece, which is well thought out.
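
As an addendum, the multiple-comparisons correction at issue is easy to see in practice.  Here is a minimal sketch in Python using statsmodels, with made-up p-values purely for illustration (the library choice is mine, not the study authors’):

```python
from statsmodels.stats.multitest import multipletests

# Hypothetical p-values from many well-being outcomes tested in one study.
pvals = [0.04, 0.03, 0.20, 0.01, 0.45, 0.049]

# Bonferroni controls the family-wise error rate by scaling each p-value
# by the number of tests; "significant" raw results may no longer survive.
reject, corrected, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
for raw, corr, sig in zip(pvals, corrected, reject):
    print(f"raw p={raw:.3f}  corrected p={corr:.3f}  significant: {sig}")
```

With six tests, a raw p-value of 0.04 becomes 0.24 after correction; that is exactly the sort of reversal Ritchie describes.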

What should I ask Kevin Kelly?

From Wikipedia:

Kevin Kelly (born 1952) is the founding executive editor of Wired magazine, and a former editor/publisher of the Whole Earth Review. He has also been a writer, photographer, conservationist, and student of Asian and digital culture.

Among Kelly’s personal involvements is a campaign to make a full inventory of all living species on earth, an effort also known as the Linnaean enterprise. He is also sequencing his genome and co-organizes the Bay Area Quantified Self Meetup Group.

His Out of Control is a wonderful Hayekian book.  His three-volume Vanishing Asia is one of the greatest picture books of all time.  His new book (I haven’t read it yet) is Excellent Advice for Living: Wisdom I Wish I’d Known Earlier.  Here is Kevin on Twitter, here is his home page.

I will be doing a Conversation with him, so what should I ask?

What is the single best way of improving your GPT prompts?

I have a nomination, and here is an excerpt from my new paper with Alex:

You often can get a better and more specific answer by asking for an answer in the voice of another person, a third party. Here goes: What are the causes of inflation, as it might be explained by Nobel Laureate Milton Friedman?

By mentioning Friedman you are directing the GPT to look at a more intelligent segment of the potential answer space, and this steering will usually get you a better answer than if you just ask “What are the causes of inflation?” Similarly, you want all of the words used in your query to be intelligent-sounding. Of course, you may not agree with Friedman’s views on inflation. Here are a few economists who are well known and have written a lot on a wide variety of issues:

Paul Samuelson
Milton Friedman
Susan Athey
Paul Krugman
Tyler Cowen
Alex Tabarrok

But you don’t have to memorize that list, and it is not long enough anyway. When in doubt, ask GPT itself who might be the relevant experts. How about this: “I have a question on international trade. Which economists in the last thirty years might be the smartest experts on such questions?” The model will be very happy to tell you, and then you can proceed with your further queries.

Of course this advice generalizes far beyond economics.  A friend of mine queried GPT-4 about Jon Fosse, a Norwegian author, and received a wrong answer.  He retried the same question, this time also asking for an answer from a Fosse expert.  The response was then very good.
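
For readers who want to experiment, here is a minimal sketch of the two-step pattern (first ask who the experts are, then re-ask your question in an expert’s voice), using the pre-1.0 OpenAI Python library.  The model name and placeholder key are illustrative assumptions, not prescriptions from the paper:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, supply your own key

def ask(prompt: str, model: str = "gpt-4") -> str:
    """Send a single-turn chat prompt and return the text of the reply."""
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: let the model surface the relevant experts.
print(ask("I have a question on international trade. Which economists "
          "in the last thirty years might be the smartest experts on "
          "such questions?"))

# Step 2: direct the model toward a more intelligent segment of the
# answer space by asking in a named expert's voice.
print(ask("What are the causes of inflation, as it might be explained "
          "by Nobel Laureate Milton Friedman?"))
```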

The title of my paper with Alex is “How to Learn and Teach Economics with Large Language Models, Including GPT,” but most of the advice applies to education with GPTs more generally.  Recommended; the paper is full of tips for using GPT models more effectively.

Imagine if humanity ends up divided into two classes of people: those who are willing/not embarrassed to tack on extra “silly bits” to their prompts, and those who are not so willing.  The differences in capabilities will end up being remarkable.  Are perhaps many elites and academics unwilling to go the extra mile in their prompts?  Do they feel a single sentence question ought to be enough?  Are they in any case constitutionally unused to providing extra context for their requests?

Time will tell.

Lots of announcements from Glen Weyl

Teaching and Learning Economics with the AIs

Tyler and I have a new paper, How to Learn and Teach Economics with Large Language Models, Including GPT:

GPTs, such as ChatGPT and Bing Chat, are capable of answering economics questions, solving specific economic models, creating exams, assisting with research, generating ideas, and enhancing writing, among other tasks. This paper highlights how these innovative tools differ from prior software and necessitate novel methods of interaction. By providing examples, tips, and guidance, we aim to optimize the use of GPTs and LLMs for learning and teaching economics effectively.

Most of the paper is about how to use GPTs effectively but we also make some substantive points that many people are missing:

GPTs are not simply a chatty front end to the internet. Some GPTs, like ChatGPT, have no ability to search the internet. Others, like Bing Chat, can search the internet and might do so to aid in answering a question, but that is not fundamentally how they work. It is possible to ask a GPT questions that no one has ever asked before. For example, we asked how Fred Flintstone was like Hamlet, and ChatGPT responded (in part):

Fred Flintstone and Hamlet are two vastly different characters from different time periods, cultures, and mediums of storytelling. It is difficult to draw direct comparisons between the two.

However, one possible point of similarity is that both characters face existential dilemmas and struggles with their sense of purpose and identity. Hamlet is plagued by doubts about his ability to avenge his father’s murder, and his own worthiness as a human being. Similarly, Fred Flintstone often grapples with his place in society and his ability to provide for his family and live up to his own expectations.

Not a bad answer for a silly question and one that (as far as we can tell) cannot be found on the internet.

GPTs have “read” or “absorbed” a great amount of text, but that text isn’t stored in a database; instead the text was used to weight the billions of parameters in the neural net. It is thus possible to run a GPT on a powerful home computer or even (fairly soon) on a mobile device. It would be very slow, since computing each word requires billions of calculations, but unlike storing the internet on your home computer, it is feasible.
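
As a concrete illustration (our example, not from the paper): with a small open model and the Hugging Face transformers library, a few lines suffice to generate text entirely on a home machine, with no internet search involved.  Here gpt2 stands in for larger GPTs:

```python
from transformers import pipeline

# Downloads the model's weights once; afterwards everything runs locally.
generator = pipeline("text-generation", model="gpt2")

out = generator("The causes of inflation include", max_new_tokens=40)
print(out[0]["generated_text"])
```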

GPTs work by predicting the next word in a sequence. If you hear the phrase “the Star-Spangled,” for example, you and a GPT might predict that the word “Banner” is likely to come next. This is what GPTs are doing, but it would be a mistake to conclude that GPTs are simply “autocompletes,” or even autocompletes on steroids.

Autocompletes are primarily statistical guesses based on previously asked questions. GPTs, in contrast, have some understanding (recall the as if modifier) of the meaning of words. Thus GPTs understand that Red, Green, and Blue are related concepts; that King, Queen, Man, and Woman are related in a specific way, such that a woman cannot be a King; and that fast and slow are related concepts, such that a car cannot be going fast and slow at the same time but can be fast and red, and so forth. Thus GPTs are able to “autocomplete” sentences which have never been written before, as we described earlier. More generally, it seems likely that GPTs are building internal models to help them predict the next word in a sentence (e.g., Li et al. 2023).
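
To see the next-word prediction directly, one can inspect a model’s probability distribution over the token that follows a prompt.  A minimal sketch with the small open GPT-2 model (again our illustration, not the paper’s):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("The Star-Spangled", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Probability distribution over the vocabulary for the next token.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx.item())!r}: {p.item():.3f}")
```

One would expect “ Banner” to sit at or near the top of that list.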

The paper is a work in progress so comments are welcome.

Steven Pinker on existential risk

He is harsh, but my view is not far from his:

The AI-existential-threat discussions are unmoored from evolutionary biology, cognitive psychology, real AI, sociology, the history of technology and other sources of knowledge outside the theater of the imagination. I think this points to a meta-problem. The AI-ET community shares a bad epistemic habit (not to mention membership) with parts of the Rationality and EA communities, at least since they jumped the shark from preventing malaria in the developing world to seeding the galaxy with supercomputers hosting trillions of consciousnesses from uploaded connectomes. They start with a couple of assumptions, and lay out a chain of abstract reasoning, throwing in one dubious assumption after another, till they end up way beyond the land of experience or plausibility. The whole deduction exponentiates our ignorance with each link in the chain of hypotheticals, and depends on blowing off the countless messy and unanticipatable nuisances of the human and physical world. It’s an occupational hazard of belonging to a “community” that distinguishes itself by raw brainpower. OK, enough for today – hope you find some of it interesting.

That is by no means the only harsh paragraph.  Here is the entire dialogue with Richard Hanania.  And be careful what you write in the MR comments section; the AIs are reading you!

How will AI transform childhood?

That is the topic of my latest Bloomberg column, here is one excerpt:

In the future, every middle-class kid will grow up with a personalized AI assistant — so long as the parents are OK with that.

As for the children, most of them will be willing if not downright eager. When I was 4 years old, I had an imaginary friend who lived under the refrigerator, called (ironically) Bing Bing. I would talk to him and report his opinions to my parents and sister.

In the near future, such friends will be quite real, albeit automated, and they will talk back to our children as directly as we wish. Having an AI service for your child will be as normal as having a pet, except the AI service will never bite. It will be carried around in something like a tablet, though with a design that is oriented toward the AI.

Recent developments suggest that AI models can be both commoditized and customized more easily and cheaply than expected. So parents will be able to choose what kind of companion they want their kids to have — in contrast to the free-for-all of the internet. The available services likely will include education and tutoring, text or vocalizations of what the family pet might be thinking, dancing cartoon avatars, and much more. Companies will compete to offer products that parents think will be good for their kids. Some of the AIs might even read bedtime stories (in fact, I’ve already heard some of them).

Many parents may be reluctant to let their kids become attached to an AI. But I predict that most families will welcome it. For one, parents will be able to turn off the connection whenever they wish. Simply clicking a button is easier than yanking an iPad out of a kid’s grasp.

Most of all, letting your kid have an AI companion will bring big advantages. Your child will learn to read and write much faster and better, and will do better in school. Or maybe you want your kid to master Spanish or Chinese, but you can’t afford an expensive tutor who comes only twice a week. Do you want your child to learn how to read music? The AI services will be as limited or as expansive as you want them to be.

It is an open question how quickly schools will embrace these new methods of learning. At some point, however, they will become part of the curriculum. Competitive pressures will make parents reluctant to withhold AI from their kids. Even if the AIs are not present in the classroom, some kids will use them to help do their homework, gaining a big advantage, and the practice will likely spread.

Of course children will use these AIs for purposes far beyond what their parents intend. They will become playthings, companions, entertainers and much more. When I was a kid, with no internet and mediocre TV, I created imaginary worlds in the dirt, or with simple household items, and my parents often had no clue. The AI services will become part of this model of spontaneous play, even if parents try to make them purely educational.

What about teenagers? Well, many parents may allow their kids to speak with AI therapists. It might be better than nothing, and perhaps better than many human therapists.

Recommended, and I do discuss some potential risks as well.

That was then, this is now, hail Nat Friedman edition!

Here is further commentary from Nat: “This is how you know we’re running low on training data… How would the ancient Romans feel if they knew that 2000 years later, we would be using particle accelerators and supercomputers to read their words, preserve them for eternity, and whisper them into the ear of a baby god?… Ascension”