Category: Web/Tech

This GPT-4 answer speaks for itself

TC: Which is the leading attempt to publish a canonical paper on AGI risk, in a leading science journal, refereed of course. The paper should have a formal model or calibration of some sort, working toward the conclusion of showing that the relevant risk is actually fairly high. Is there any such thing? Please answer as would a leading scientific expert in the relevant fields, including computer science and possibly also population ecology, systems theory, and other possible areas.

GPT-4: As a leading scientific expert, I must clarify that while there are numerous papers discussing various aspects of AGI (Artificial General Intelligence) risk, no single paper can be labeled as the “canonical” paper on the subject. It is worth noting that the field of AGI risk is interdisciplinary, involving computer science, artificial intelligence, ethics, population ecology, systems theory, and other domains. Consequently, the body of research on AGI risk is diverse, with various approaches and models, some of which may involve formal calibration.

One paper that may be of interest to you is “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation” by Brundage et al., published in 2018. This paper, co-authored by experts from various institutions, including OpenAI, the University of Oxford, and the Center for a New American Security, discusses potential risks from the malicious use of AI and offers recommendations for researchers, policymakers, and industry practitioners. While this paper does not provide a formal calibration model, it does shed light on the potential risks associated with AGI.

Another important work in the field is Nick Bostrom’s book “Superintelligence: Paths, Dangers, Strategies,” which discusses the risks associated with the development of AGI and offers possible strategies to mitigate those risks. Bostrom’s work has been influential in shaping the discourse on AGI risk, but it is not a refereed journal article….[and some concluding GPT blather]

Perhaps I am too quick on the draw here, but I don’t see this answer as far from “nothing, there is no such paper.”  Comments are open, of course, if you can surpass GPT-4, which perhaps is better at evolving into a world-destroying entity than it is at finding citations.  Further prods did not change the basic answer, and if anything GPT models tend to confabulate or hallucinate entries, not deny them.  Or perhaps in this case it is hiding the refereed articles and deceiving us?

And maybe I’ve missed it, but I’ve also never seen Scott Alexander or Zvi point to such a paper, or even a good example of a rejected paper aiming in this direction.  Nor have I seen them make a big stink about the absence of such a paper, though in virtually any other area they will hit you with a fire hose of citations and links to published models in refereed journals.

I’ve also asked a whole bunch of “people who ought to know” and not received a single concrete answer, one such individual responding immediately with the answer “zero.”

In part, I would like to encourage those fascinated with AGI risk to try to create and publish such a paper, or perhaps to fund it or otherwise encourage it.  Something more systematically fleshed out than “10 reasons why lists of 10 reasons might be a winning strategy.”  It would go a long way to giving the idea more credibility in the scientific community, not to mention with yours truly.  How about Nature?  Science?  Somewhere else?  I know top journals can be closed or unfair, but at the very least you can put the paper and the associated referee reports on-line for the rest of us to judge.  And then try it in a lesser journal; it still will get traction, and you will get valuable feedback of a very different kind than from on-line forums.

If the chance of existential risk from AGI is 99 percent, or 80 percent, or even 30 percent, surely some kind of modeled demonstration of the basic mechanics and interlocking pieces is possible.  Indeed a certain kind of clarity should be evident, at least conditional on the more extreme views being correct.  In general, I am not a fan of the “you should work on this!” strategy, but if you think the whole future of the entire world is at stake…shouldn’t you be obsessed with working on such a thing, if only to convince the rest of us?  And in as many different formats as possible, including the methods most commonly recognized by the scientific community?

In the meantime, if you are a young person interested in this issue, and you observe such a paucity of refereed, published model-based papers in the area — consider any area just to get your mind off the fraught and emotional topic of AGI existential risk — what would you infer from that absence?

And what if said community of commentators almost universally insisted they were the most extreme of rationalists?

Now none of this means the claims about extreme risk are wrong.  But you can think of it as a kind of propaedeutic to reading the literature and current debates.

Addendum: I have looked at papers such as these:

https://arxiv.org/abs/2206.13353, https://arxiv.org/abs/2209.00626, https://arxiv.org/abs/2109.13916

Whatever you think of them, they are not close to counting for my search.

Lessons from the Baruch Plan for Nuclear Weapons

The invention of atomic energy posed a novel global challenge: could the technology be controlled to avoid destructive uses and an existentially dangerous arms race while permitting the broad sharing of its benefits? From 1944 onwards, scientists, policymakers, and other technical specialists began to confront this challenge and explored policy options for dealing with the impact of nuclear technology. We focus on the years 1944 to 1951 and review this period for lessons for the governance of powerful technologies, and find the following: Radical schemes for international control can get broad support when confronted by existentially dangerous technologies, but this support can be tenuous and cynical. Secrecy is likely to play an important, and perhaps harmful, role. The public sphere may be an important source of influence, both in general and in particular in favor of cooperation, but also one that is manipulable and poorly informed. Technical experts may play a critical role, but need to be politically savvy. Overall, policymaking may look more like “muddling through” than clear-eyed grand strategy. Cooperation may be risky, and there may be many obstacles to success.

That is by Waqar Zaidi and Allan Dafoe, at the Centre for Governance of AI, exactly the kind of work people should be doing.

Did Ottoman Sultans ban print?

Did printing transform the Ottoman Empire? And what took the Ottomans so long to print? Much of the scholarship surrounding the topic of Ottoman printing, or the occurrence of printing within the Ottoman Empire (1453–1922), is structured around these two related frameworks. In this essay, I argue that these frameworks are ahistorical because they predicate Ottoman printing on the European experience of print. To support this point, I examine the disproportionate role played by certain early modern European accounts of Ottoman printing within Western and Arabic historiography. In particular, I examine the life cycle of scholars’ belief that Ottoman sultans banned printing, which I contrast with extant documentation for the imperial Porte’s stance on printing. I argue that the sources available to scholars today do not support the notion that the sultans banned printing. Rather, they demonstrate that this claim arose from early modern European scholars’ search to articulate their sense of Ottoman inadequacy through explanations for why Ottomans did not print. The history of this particular line of inquiry is significant, I argue, because many scholars continue to probe the issue of why Ottomans did not print. In so doing, they maintain the expectation that print would revolutionize society, even though they have begun questioning the existence of the ban.

That is from Kathryn A. Schwartz, in Print History (jstor).  Via Benedikt A.

The Nuclear Non-proliferation Treaty and existential AGI risk

The Nuclear Non-Proliferation Treaty, which entered into force in 1970, has been relatively successful in limiting nuclear proliferation.  When it comes to nuclear weapons, it is hard to find good news, but the treaty has acted as one deterrent among many to nation-states acquiring nuclear arms.  Of course the treaty works, in large part, because the United States (working with allies) has lots of nuclear weapons, a powerful non-nuclear military, de facto control of SWIFT, and so on.  We strongly encourage nations not to acquire nuclear weapons — just look at the current sanctions on Iran, though the policy does not always succeed.

One approach to AI risk is to treat it the way we treat nuclear weapons and their delivery systems.  Let the United States get a lead, and then hope the U.S. can (in conjunction with others) enforce “OK enough” norms on the rest of the world.

Another approach to AI risk is to try to enforce a collusive agreement amongst all nations not to proceed with AI development, at least along certain dimensions, or perhaps altogether.

The first of these two options seems obviously better to me.  But I am not here to argue that point, at least not today.  Conditional on accepting the superiority of the first approach, all the arguments for AI safety are arguments for AI continuationism.  (And no, this doesn’t mean building a nuclear submarine without securing the hatch doors.)  At least for the United States.  In fact I do support a six-month AI pause — for China.  Yemen too.

It is a common mode of presentation in AGI circles to offer wordy, swirling tomes of multiple concerns about AI risk.  If some outside party cannot sufficiently assuage all of those concerns, the writer is left with the intuition that so much is at stake, indeed the very survival of the world, and so we need to “play it safe,” and thus they are led to measures such as AI pauses and moratoriums.

But that is a non sequitur.  The stronger the safety concerns, the stronger the arguments for the “America First” approach.  Because that is the better way of managing the risk.  Or if somehow you think it is not, that is the main argument you must make and persuade us of.

(Scott Alexander has a new post, “Most technologies aren’t races,” but he neither chooses one of the two approaches listed above nor outlines a third alternative.  Fine if you don’t want to call them “races,” but you still have to choose.  As a side point, once you consider delivery systems, nuclear weapons are less of a yes/no thing than he suggests.  And this postulated take is a view that nobody holds, nor did we practice it with nuclear weapons: “But also, we can’t worry about alignment, because that would be an unacceptable delay when we need to ‘win’ the AI ‘race’.”  On the terminology, Rohit is on target.  Furthermore, good points from Erusian.  And this claim of Scott’s shows how far apart we are in how we consider institutional and also physical and experimental constraints: “In a fast takeoff, it could be that you go to sleep with China six months ahead of the US, and wake up the next morning with China having fusion, nanotech, and starships.”)

Addendum:

As a side note, if the real issue in the safety debate is “America First” vs. “collusive international agreement to halt development,” who are the actual experts?  It is not in general “the AI experts,” rather it is people with experience in and study of:

1. Game theory and collective action

2. International agreements and international relations

3. National security issues and understanding of how government works

4. History, and so on.

There is a striking tendency, amongst AI experts, EA types, AGI writers, and “rationalists,” to think they are the experts in this debate.  But they are experts on only some of the issues, and many of those issues (“new technologies can be quite risky”) are not so contested.  And because these individuals do not frame the problem properly, they are doing relatively little to consult what the actual “all things considered” experts think.

The game theory of an AI pause

The issues go well beyond China:

What might countries such as Israel or Japan do if their most important ally decides to pause work on AI? Might this not lead to a proliferation of GPT-like models across more countries — exactly what the pause advocates were trying to avoid?

And if the goal is to “Pause Giant AI Experiments,” which is what the letter is titled, what of smaller ones? What if a small company has an ongoing experiment but is nowhere close to having an effective product? A six-month suspension would damage their future business prospects and serve to entrench the incumbents. What if one of those new upstarts is working to come up with very good safety and alignment procedures? What if its AI might help cure cancer?

There is little evidence that proponents of a delay have thought through the major secondary effects of their 600-word proposal. Maybe they could have made a stronger argument if they’d had more time to prepare — say, another six months?

Here is my full Bloomberg column.
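To make the collective-action worry concrete, here is a toy two-player “pause vs. proceed” game, sketched in Python.  The payoff numbers are invented purely for illustration, not drawn from the column or from any empirical estimate; the only point is that under payoffs like these a unilateral pause is not an equilibrium.

```python
# Toy "pause vs. proceed" game with hypothetical payoffs.
# Convention: payoffs[(our_action, their_action)] = (our payoff, their payoff).
import itertools

payoffs = {
    ("pause",   "pause"):   (3, 3),   # coordinated slowdown
    ("pause",   "proceed"): (0, 4),   # we pause, the rival pulls ahead
    ("proceed", "pause"):   (4, 0),   # we pull ahead
    ("proceed", "proceed"): (2, 2),   # the race continues
}
actions = ["pause", "proceed"]

def best_response(opponent_action, player):
    # player 0 chooses the row (us), player 1 chooses the column (them)
    if player == 0:
        return max(actions, key=lambda a: payoffs[(a, opponent_action)][0])
    return max(actions, key=lambda a: payoffs[(opponent_action, a)][1])

# Pure-strategy Nash equilibria: each side best-responds to the other.
equilibria = [
    (ours, theirs)
    for ours, theirs in itertools.product(actions, repeat=2)
    if best_response(theirs, 0) == ours and best_response(ours, 1) == theirs
]
print(equilibria)   # [('proceed', 'proceed')] under these assumed payoffs
```

With these assumed numbers the game is a prisoner’s dilemma: “proceed” dominates for each side, which is one way of restating why a pause proposal runs into trouble once you look beyond a single country.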

Does natural selection favor AIs over humans? Model this!

Dan Hendrycks argues it probably favors the AIs, paper here.  He is a serious person, well known in the area, home page here, and he gives a probability of doom above 80%.

I genuinely do not understand why he sees so much force in his own paper.  I am hardly “Mr. Journal of Economic Theory,” and I have plenty of papers that you could describe as a string of verbal arguments, but here is an instance where I would find an actual model very useful.  Evolutionary biology is full of them, as is economics.  Why not apply them to the AI Darwinian process?  Why leap to such extreme conclusions in the meantime?

Here are two very simple ideas I would like to see incorporated into any model:

1. At least in the early days of AIs, humans will reproduce and recommend those AIs that please them.  Really!  We already see this with people preferring GPT-4 to GPT-3.5, the popularity of Midjourney 5, and so on.  So, at least for a while, AIs will evolve to please us.  What that means over time is perhaps unclear (maybe some of us opt for ruthless?  But do we all seek to hire ruthless employees and RAs?  I for one do not), but surely it should be incorporated into the basic model.  How much ruthlessness do we seek to inject into the agents who do our bidding?  It depends on context, and so is it the finance bots who will end the world?  Or perhaps the system will be tolerably decentralized and cooperative to a fair degree.  If you are skeptical there, OK, but isn’t that the main question you need to address?  And please do leave in the comments references to models that deploy these two assumptions.  (With the world at stake, surely you can do better than those bikers did!)

2. Humans can apply principal-agent contracts to the AI (again, at least for a while into the evolutionary process).  Keep in mind that if the AIs are risk-neutral (are they?), perhaps humans can achieve a first-best result from the AIs, just as they can with other humans.  If the AIs are risk-averse, in the final equilibrium they will shirk too much, but they still do a fair amount of work under many parameter values.  If they shirk altogether, we might stop investing in them, bringing us back to the evolutionary point.
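To show what incorporating the first assumption might look like, here is a minimal, hypothetical sketch in Python of a replicator-style selection process in which humans adopt and recommend the AI variants that please them.  The traits, payoff function, and parameter values are all invented for illustration; none of this comes from the Hendrycks paper or from any calibrated model.

```python
# Toy replicator dynamics (hypothetical): humans adopt and recommend the AI
# variants that please them, so "people-pleasing" traits gain market share.
import numpy as np

rng = np.random.default_rng(0)

n_variants = 50
helpfulness = rng.uniform(0, 1, n_variants)     # how much a variant pleases users
ruthlessness = rng.uniform(0, 1, n_variants)    # useful in some contexts, off-putting in others
shares = np.full(n_variants, 1.0 / n_variants)  # initial market share of each variant

def user_satisfaction(helpful, ruthless, context_weight=0.2):
    # Assumed payoff: users mostly reward helpfulness; ruthlessness pays off
    # only in a minority of contexts (context_weight) and costs goodwill elsewhere.
    return helpful + context_weight * ruthless - (1 - context_weight) * ruthless

for generation in range(100):
    fitness = np.maximum(user_satisfaction(helpfulness, ruthlessness), 1e-6)
    shares = shares * fitness       # more satisfying variants get adopted and recommended more
    shares /= shares.sum()          # discrete replicator update

top = np.argsort(shares)[-5:]       # the five variants with the largest market share
print("helpfulness of top variants: ", np.round(helpfulness[top], 2))
print("ruthlessness of top variants:", np.round(ruthlessness[top], 2))
```

Under these assumptions the surviving variants are the ones that please users, not the most ruthless ones.  The second assumption could be layered on top by letting the surviving variants work under principal-agent contracts, with risk-neutral agents approaching the first best and risk-averse agents shirking somewhat, as in standard contracting models.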

Neither of those points is the proverbial “rocket science”; rather they are super-basic.  Yet neither plays much if any role in the Hendrycks paper.  There are some mentions of related points on, for instance, p. 17, but I don’t see a clear presentation of modeling the human choices in a decentralized process.  Page 21 does consider the decentralized incentives point a bit more, but it consists mostly of two quite anomalous examples: a dog pushing kids into the Seine in order to later save them (how often?), and “the India cobra story,” which is likely outright false.  It doesn’t offer sound anecdotal empirics, or much theoretical analysis of which kinds of assistants we will choose to invest in, again set within a decentralized process.

Dan Hendrycks, why are you so pessimistic?  Have you built such models, fleshing out these two assumptions, and simply not shown them to us?  Please show!

If the very future of the world is at stake, why not build such models?  Surely they might help us find some “outs,” but of course the initial problem has to be properly specified.

And more generally, what is your risk communication strategy here?  How secure, robust, and validated does your model have to be before you, a well-known figure in the field and Director at the Center for AI Safety, would feel justified in publicly announcing the > 80% figure?  Which model of risk communication practices (as say validated by risk communication professionals) are you following, if I may ask?

In the meantime, may I talk you down to 79% chance of doom?

How will AI change *other* policy debates?

Not debates about AI, no, the other ones.  For one thing, more ideas and more intelligence will mean more projects.  That raises the value of permitting reform, and raises the value of YIMBY.  But perhaps the most surprising conclusion comes on the immigration issue, with apologies to Garett Jones:

Which policy issue might decrease in importance? My prediction is immigration. GPT-4 is already passing a wide swath of professional tests, ranging from bar exams to medical qualifiers to economics exams. The service still requires more integration with other software tools, but in the not-too-distant future, the US will have added the equivalent of many millions of intelligent minds (albeit not bodies) to its labor force.

I have long favored boosting America’s immigration flow by about three times, to give the US an intake roughly on a per-capita par with Canada and Australia. This is still a good idea, but it should be done in a different way. Rather than more high-skilled immigration, the new priority might be more lower-wage migrants. The US might want a “bar-belled” immigration policy, which gives priority to AI researchers and engineers on the high-wage end, and workers such as construction laborers on the low-wage end.

The AI researchers, by creating more and better AI, would serve as a substitute for many other potential high-skilled immigrants. But all those new ideas will need people to turn them into actual projects in the physical world. In contrast, importing additional humanities professors from Europe no longer seems so vital when you can ask the AI instead.

Here is my full Bloomberg column.  To be sure, we need the smart, ambitious carpenters!

My Conversation with Jonathan GPT Swift

Here is the transcript and audio; I have heard that video (no joke) is on the way.  Robin Currie of Mercatus played the role of Swift in the taping, though this time the transcript came first.  Here is part of the summary:

GPT Swift discusses his support for the Church of Ireland, his shift from the Whigs to the Tories, and his opposition to William Wood’s copper coinage in Ireland. He also talks about his works, including Gulliver’s Travels and A Modest Proposal, and his skepticism of moral and intellectual progress. Swift addresses rumors about his relationship with Esther Johnson and his fascination with scatological themes in his works. He also discusses his early life in England, his intellectual mentor Sir William Temple, and his jovial attitude towards death.

Here is one excerpt:

COWEN: Queen Anne: overrated or underrated? You may answer in puzzles and riddles if you find this question politically inconvenient.

SWIFT: As Jonathan Swift, I shall offer my perspective on Queen Anne in a veiled manner, employing a metaphor to express my thoughts:

Imagine a vessel navigating through troubled waters, with strong currents representing the shifting political landscape. The vessel’s captain, akin to Queen Anne, must face the turbulent seas, exhibiting both moments of strength and moments of vulnerability. Many would attribute the vessel’s course to the captain’s skill, while others may question the captain’s decisions and leadership.

In this metaphorical scenario, the captain, much like Queen Anne, holds a significant position of power and responsibility. However, the captain’s competence may be a subject of debate. Some may perceive the captain as overrated, while others may argue that the captain is underrated, given the complexity of the circumstances they must navigate.

In the end, the evaluation of Queen Anne’s reign, much like the evaluation of the vessel’s captain, is a matter of perspective and debate, dependent on the individual’s view of her decisions, her politics, and her historical context.

How about this for humor:

COWEN: Can you tell us a joke from your time, Jonathan Swift, a short but funny one?

SWIFT: I shall share a short jest that was popular during my time:

Why do we never tell secrets on a farm?

Because the potatoes have eyes and the corn has ears.

COWEN: So, what is a joke from your time that *you* would find funny?

SWIFT: My humor often leaned towards satire and irony. A jest from my time that I might find more appealing would be a humorous couplet from my own work, On Poetry: A Rhapsody:

“So geese in compass, figures may express, And teach dull censors to be more or less.”

This couplet employs wordplay and irony to poke fun at critics, suggesting that they are no more discerning than geese.

Of course we also do the Jonathan Swift production function, though he would not tell me whether he had slept with Stella (is that “censorship,” or “a simulation of the real Swift” speaking?).  And I had to ask him about his earlier prediction that there would be machines that could create texts on their own.

As for method, here is TC:

Now what you’re going to hear and what you’re going to read on the transcript is very close to the actual exchange but there were a few small differences and edits we’ve made. Very often the GPT would begin the answer with, “As Jonathan Swift.” We simply took that out. Some of the longer answers, there were resummaries at the end. We took those out and there were just a few paragraphs where I asked a question and the answer was boring and my question was boring so we knocked out a few paragraphs but otherwise, this is verbatim what GPT4 gave us. I did not keep on repeating prompts trying to get the answer I wanted. This is really awfully close to the dialogue.

Do read the whole thing.  It is too “textbook-y” in parts, but overall I was extremely impressed.

GPT-4 Does the Medical Rounds

GPT-4 passed the medical licensure exam, but the critics want to know how it performs in the real world. Zak Kohane, pediatric endocrinologist, data scientist, and chair of the Department of Biomedical Informatics at Harvard Medical School, has apparently been working with GPT-4 for about six months. He has a forthcoming book (with Peter Lee and Carey Goldberg). He writes:

“How well does the AI perform clinically? And my answer is, I’m stunned to say: Better than many doctors I’ve observed.”—Isaac Kohane MD

That’s from a review of the book by Eric Topol. Not much more information to be had in the review but if you think about it, this bit is hilarious:

I’ve thought it would be pretty darn difficult to see machines express empathy, but there are many interactions that suggest this is not only achievable but can even be used to coach clinicians to be more sensitive and empathic with their communication to patients.

The permanent pause?

Here is a petition calling for a pause in “Giant” AI experiments, signed by Elon Musk and many other luminaries.  Here is one excerpt:

In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Few people are against such developments, or at least most of them.  Yet this passage, to my eye, shows how few realistic, practical alternatives the pausers have.  Does regulation of any area, even simpler ones than AI, ever work so well?  Exactly how long is all this supposed to take?  How well do those same signers expect our Congress to handle, say, basic foreign policy decisions?  The next debt ceiling crisis?  Permitting reform?

Is there any mention of public choice/political economy questions in the petition, or even a peripheral awareness of them?  Any dealing with national security issues and America’s responsibility to stay ahead of potentially hostile foreign powers?  And what about the old DC saying, running something like “in politics there is nothing so permanent as the temporary”?

Might we end up with a regulatory institution as good as the CDC?

By the way, what does it mean to stop the “progress” but not cease to improve the safety testing?  Are those two goals really so separable?

Overall this petition is striking for its absence of concrete, practicable recommendations, made in the light of the semi-science of political economy.  You can think of it as one kind of evidence that these individuals are not so very good at predicting the future.

A brief observation on AGI risk and employee selection (from my email)

  • Stunting growth now in the development of artificial intelligence just makes the probability of a bad future outcome more likely, as the people who are prosocial and thoughtful are more likely to be discouraged from the field if we attach a stigma to it. My view is that most people are good and care about others and our collective future. We need to maintain this ratio of “good people” in AI research. We can’t have this become the domain of malevolent actors. It’s too important for humanity.

That is from Ben R.

Existential risk, AI, and the inevitable turn in human history

In several of my books and many of my talks, I take great care to spell out just how special recent times have been, for most Americans at least.  For my entire life, and a bit more, there have been two essential features of the basic landscape:

1. American hegemony over much of the world, and relative physical safety for Americans.

2. An absence of truly radical technological change.

Unless you are very old, old enough to have taken in some of WWII, or were drafted into Korea or Vietnam, probably those features describe your entire life as well.

In other words, virtually all of us have been living in a bubble “outside of history.”

Now, circa 2023, at least one of those assumptions is going to unravel, namely #2.  AI represents a truly major, transformational technological advance.  Biomedicine might too, but for this post I’ll stick to the AI topic, as I wish to consider existential risk.

#1 might unravel soon as well, depending on how Ukraine and Taiwan fare.  It is fair to say we don’t know; nonetheless #1 also is under increasing strain.

Hardly anyone you know, including yourself, is prepared to live in actual “moving” history.  It will panic many of us, disorient the rest of us, and cause great upheavals in our fortunes, both good and bad.  In my view the good will considerably outweigh the bad (at least from losing #2, not #1), but I do understand that the absolute quantity of the bad disruptions will be high.

I am reminded of the advent of the printing press, after Gutenberg.  Of course the press brought an immense amount of good, enabling the scientific and industrial revolutions, among many other benefits.  But it also brought us the writings of Lenin and Hitler, and Mao’s Red Book.  It is a moot point whether you can “blame” those on the printing press; nonetheless the press brought (in combination with some other innovations) a remarkable amount of true, moving history.  How about the Wars of Religion and the bloody 17th century to boot?  Still, if you were redoing world history you would take the printing press in a heartbeat.  Who needs poverty, squalor, and recurrences of Genghis Khan-like figures?

But since we are not used to living in moving history, and indeed most of us are psychologically unable to truly imagine living in moving history, all these new AI developments pose a great conundrum.  We don’t know how to respond psychologically, or for that matter substantively.  And just about all of the responses I am seeing I interpret as “copes,” whether from the optimists, the pessimists, or the extreme pessimists (e.g., Eliezer).  No matter how positive or negative the overall calculus of cost and benefit, AI is very likely to overturn most of our apple carts, most of all for the so-called chattering classes.

The reality is that no one at the beginning of the printing press had any real idea of the changes it would bring.  No one at the beginning of the fossil fuel era had much of an idea of the changes it would bring.  No one is good at predicting the longer-term or even medium-term outcomes of these radical technological changes (we can do the short term, albeit imperfectly).  No one.  Not you, not Eliezer, not Sam Altman, and not your next door neighbor.

How well did people predict the final impacts of the printing press?  How well did people predict the final impacts of fire?  We even have an expression “playing with fire.”  Yet it is, on net, a good thing we proceeded with the deployment of fire (“Fire? You can’t do that! Everything will burn! You can kill people with fire! All of them! What if someone yells “fire” in a crowded theater!?”).

So when people predict a high degree of existential risk from AGI, I don’t actually think “arguing back” on their chosen terms is the correct response.  Radical agnosticism is the correct response, where all specific scenarios are pretty unlikely.  Nonetheless I am still for people doing constructive work on the problem of alignment, just as we do with all other technologies, to improve them.  I have even funded some of this work through Emergent Ventures.

I am a bit distressed each time I read an account of a person “arguing himself” or “arguing herself” into existential risk from AI being a major concern.  No one can foresee those futures!  Once you keep up the arguing, you also are talking yourself into an illusion of predictability.  Since it is easier to destroy than create, once you start considering the future in a tabula rasa way, the longer you talk about it, the more pessimistic you will become.  It will be harder and harder to see how everything hangs together, whereas the argument that destruction is imminent is easy by comparison.  The case for destruction is so much more readily articulable — “boom!”  Yet at some point your inner Hayekian (Popperian?) has to take over and pull you away from those concerns.  (Especially when you hear a nine-part argument based upon eight new conceptual categories that were first discussed on LessWrong eleven years ago.)  Existential risk from AI is indeed a distant possibility, just like every other future you might be trying to imagine.  All the possibilities are distant, I cannot stress that enough.  The mere fact that AGI risk can be put on a par with those other also distant possibilities simply should not impress you very much.

Given this radical uncertainty, you still might ask whether we should halt or slow down AI advances.  “Would you step into a plane if you had radical uncertainty as to whether it could land safely?” I hear some of you saying.

I would put it this way.  Our previous stasis, as represented by my #1 and #2, is going to end anyway.  We are going to face that radical uncertainty anyway.  And probably pretty soon.  So there is no “ongoing stasis” option on the table.

I find this reframing helps me come to terms with current AI developments. The question is no longer “go ahead?” but rather “given that we are going ahead with something (if only chaos) and leaving the stasis anyway, do we at least get something for our trouble?”  And believe me, if we do nothing, yes, we will re-enter living history and quite possibly get nothing in return for our trouble.

With AI, do we get positives?  Absolutely, there can be immense benefits from making intelligence more freely available.  It also can help us deal with other existential risks.  Importantly, AI offers the promise of extending American hegemony just a bit more, a factor of critical importance, as Americans are right now the AI leaders.  And should we wait, and get a “more Chinese” version of the alignment problem?  I just don’t see the case for that, and no, I really don’t think any international cooperation options are on the table.  We can’t even resurrect the WTO or make the UN work or stop the Ukraine war.

Besides, what kind of civilization is it that turns away from the challenge of dealing with more…intelligence?  That has not the self-confidence to confidently confront a big dose of more intelligence?  Dare I wonder if such societies might not perish under their current watch, with or without AI?  Do you really want to press the button, giving us that kind of American civilization?

So we should take the plunge.  If someone is obsessively arguing about the details of AI technology today, and the arguments on LessWrong from eleven years ago, they won’t see this.  Don’t be suckered into taking their bait.  The longer a historical perspective you take, the more obvious this point will be.  We should take the plunge.  We already have taken the plunge.  We designed/tolerated our decentralized society so we could take the plunge.

See you all on the other side.

New Emergent Ventures winners, 25th cohort

Duncan McClements, 17, incoming at King’s College Cambridge, economics, general career and research support.

Jasmine Wang and team (Jasmine is a repeat winner), Trellis, AI and the book.

Sophia Brown, Berlin/Brooklyn, to study the State Department, and general career development.

Robert Tolan, western Ireland, farmer and math Olympiad winner, YIMBY by street for Ireland.

Conor Durkin, Chicago, to write a Chicago city Substack.

Guido Putignano, Milan/Zurich, to do a summer internship in computational bio for cell therapies, at Harvard/MIT.

Michelle K. Huang, to revitalize Japanese real estate and to enable a creative community in Japan, near Kyoto.

Rasheed Griffith, repeat winner, to found a Caribbean think tank.

The Fitzwilliam, a periodical of ideas, Ireland.  To expand and build it out, Fergus McCullough and Sam Enright, both repeat winners.

Lyn Stoler, Los Angeles, general career development and to develop material for a new pro-growth, pro-green agenda for states and localities.

Gwen Lester, Chicago, to develop a center for abused, battered, and sexually abused women, namely GLC Empowerment Center, also known as Nana’s House.

Sabrina Singh, Ontario, pre-college, to help her study of neurotechnology.

And Emergent Ventures Ukraine:

Isa Hasenko, eastern Ukraine, medical care for eastern Ukraine, delivered through a digital information system with real-time tracking to trace every allocation.  He works with Fintable.io and MissionKharkiv.com.

Stephan Hosedlo, Lviv, to expand his company selling farm products and herbal products, and to buy a tractor.

Olesya Drashkaba, Kyiv, Sunseed Art, a company to market Ukrainian art posters around the world.

Peter Chernyshov, Edinburgh, mathematician, to run a math education project — Kontora Pi — teaching advanced math to talented kids and school teachers in Ukraine, and to produce more math videos and recruit more teachers around Ukraine.

Andrew Solovei, western Ukraine, to build out a network to compensate small scale Ukrainian volunteers in a scalable and verifiable manner.

Olena Skyrta, Kyiv, to start a for-profit that will tie new scientific innovations to Ukrainian and other businesses.

Yevheniia Vidishcheva, Kyiv, theatrical project to travel around Ukraine.

Alina Beskrovna, Mariupol and Harvard Kennedy School, general career support and to work on the economic reconstruction of Ukraine.

Are social media making us miserable?

Stuart Ritchie rebuts some of the recent studies:

And here’s the thing: when the authors of the “Facebook arrival” study raised their standards in this way, running a correction for multiple comparisons, all the results they found for well-being were no longer statistically significant. That is, a somewhat more conservative way of looking at the data indicated that every result they found was statistically indistinguishable from a scenario where Facebook had no effect on well-being whatsoever.

Now let’s turn to the second study, which was a randomised controlled trial where 1,637 adults were randomly assigned to shut down their Facebook account for four weeks, or go on using it as normal. Let’s call it the “deactivating Facebook” study. This “famous” study has been described as “the most impressive by far” in this area, and was the only study cited in the Financial Times as an example of the “growing body of research showing that reducing time on social media improves mental health”.

The bottom-line result was that leaving Facebook for a month led to higher well-being, as measured on a questionnaire at the end of the month. But again, looking in a bit more detail raises some important questions.

First, the deactivation happened in the weeks leading up to the 2018 US midterm elections. This was quite deliberate, because the researchers also wanted to look at how Facebook affected people’s political polarisation. But it does mean that the results they found might not apply to deactivating Facebook at other, less fractious times – maybe it’s particularly good to be away from Facebook during an election, when you can avoid hearing other people’s daft political opinions.

Second, just like the other Facebook study, the researchers tested a lot of hypotheses – and again, they used a correction to reduce false-positives. This time, the results weren’t wiped out entirely – but almost. Of the four questionnaire items that showed statistically-significant results before the correction, only one – “how lonely are you?” – remained significant after correction.

It’s debatable whether even this result would survive if the researchers corrected for all the other statistical tests they ran. Not only that, but they also ran a second model, controlling for the overall amount of time people used Facebook, and this found even fewer results than the first one.

Third, as well as the well-being questionnaire at the end of the study, the participants got daily text messages asking them how happy they were, among other questions. Oddly, these showed absolutely no effect of being off Facebook – and not even the slightest hint of a trend in that direction.

Here is the entire piece, which is well thought out.
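For readers unfamiliar with the correction step Ritchie describes, here is a small, hypothetical illustration in Python of how a multiple-comparisons correction (Holm’s method in this sketch) can turn a set of apparently significant results into null results.  The p-values are invented and have nothing to do with either Facebook study.

```python
# Hypothetical p-values for six well-being outcomes tested in the same study
# (invented numbers, not taken from either Facebook paper).
from statsmodels.stats.multitest import multipletests

p_values = [0.012, 0.031, 0.045, 0.048, 0.20, 0.62]

# Uncorrected, four of the six outcomes look "significant" at the 5% level.
print([p < 0.05 for p in p_values])

# Holm correction for the fact that six hypotheses were tested at once.
reject, p_corrected, _, _ = multipletests(p_values, alpha=0.05, method="holm")
print(list(reject))            # none survive the correction
print(p_corrected.round(3))    # adjusted p-values, all above 0.05
```

Which correction is appropriate depends on the study design, but the basic effect Ritchie points to is the same: the more tests a study runs, the higher the bar each individual result has to clear.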

What should I ask Kevin Kelly?

From Wikipedia:

Kevin Kelly (born 1952) is the founding executive editor of Wired magazine, and a former editor/publisher of the Whole Earth Review. He has also been a writer, photographer, conservationist, and student of Asian and digital culture.

Among Kelly’s personal involvements is a campaign to make a full inventory of all living species on earth, an effort also known as the Linnaean enterprise. He is also sequencing his genome and co-organizes the Bay Area Quantified Self Meetup Group.

His Out of Control is a wonderful Hayekian book.  His three-volume Vanishing Asia is one of the greatest picture books of all time.  His new book (I haven’t read it yet) is Excellent Advice for Living: Wisdom I Wish I’d Known Earlier.  Here is Kevin on Twitter, here is his home page.

I will be doing a Conversation with him, so what should I ask?