
Why superyachts?

I’ve been reading more polemics against superyachts lately, for instance from Paul Krugman (NYT).  I’ve never been on a superyacht, or even a non-super yacht, but I can give you my “hypothesis from a distance” about them.  A super-wealthy individual wants to convene a group of people, all of whom are talented, or friends, or famous, but not all of whom are super rich.  The point is to offer them some kind of extra special experience, namely the superyacht.  And you can’t convene everybody in normal public spaces, in part because the super rich person is famous and would attract notice, and in part because of the security risk.  Thus enter the superyacht.

I don’t know how useful these convenings are on average (do the critics?), but I don’t think they are merely or even mainly about status-seeking by the super rich.  The desire is to have a focal, locational base for drawing people together and sometimes working on projects of mutual interest.  That said, I don’t doubt the super rich person enjoys seeing others admire a beautiful yacht, but is that so crazy or craven?  It doesn’t have to be a zero-sum pleasure.  That is, I enjoy it if my friends enjoy my (rather modest) backyard deck, but I’m not so concerned about whether they like Alex’s house better, etc.  There is room for fun everywhere!  Even on superyachts.

Thursday assorted links

1. Open AI lessons for science policy.  And Steve Landsburg and GPT-4 are not in synch.

2. “Every single street lamp in New Zealand’s capital city is at risk of plunging without warning on to the footpaths below them.” And can anything stop the feral hog invasion?

3. Survey of Tyler Cowen’s “My Favorite Things.”

4. Brian Potter on how solar power got cheap.

5. Clearinghouse for LLMs in scientific research workflows.

6. More on AutoGPTs.  And does AI reduce existential risk?

7. Ghana’s concentration camps for witches.

What I’ve been reading and not reading (due to travel)

Colin Kidd, Union and Unionisms: Political Thought in Scotland, 1500-2000.  A very good and well-written look at Scottish views on the Union over the centuries.  Explained conceptually in a nice way, not just a catalog, and tied to religion as well.

Thomas Bartlett, Ireland: A History.  One of the best one-volume introductions to Irish history.

W. Paul Reeve, Let’s Talk About Race and Priesthood.  Argues that the Mormons had relatively universalistic origins, and that Brigham Young was the one who introduced the later segregationist ideas.

There is Peter Turchin, End Times: Elites, Counter-Elites, and the Path of Political Disintegration.

The impressive Jon Elster has just published America Before 1787: The Unraveling of a Colonial Regime.

Do not forget John Cochrane’s The Fiscal Theory of the Price Level, as presented on John’s blog as well.

Coming out is Robin Douglass, Mandeville’s Fable: Pride, Hypocrisy, and Sociability.

Wednesday assorted links

1. Who benefits most from name visibility bias during the journal editorial process?

2. Prendergast watercolor for 500-700k, truly a splendid piece, St. Marks in Venice.  The collection as a whole, while not my taste (“too American” in a very particular direction), shows exquisite taste.  You can learn a lot by studying their choices.  From the Wolf family.  Here is more from their collection.

3. AI Policy Guide, by Matthew Mittelsteadt at Mercatus.  And a clear explanation of the new “autonomous” AIs.

4. Current U.S. defense spending is, in historical terms, at a relative low point.

5. Miami Native, new (non-leftist) magazine on the way, presenting and explicating and enhancing the status of the culture of Miami.  They are looking for contributors.  Mainly a physical copy magazine, planning only a limited presence on-line.

6. Genetic timeline of humans? (speculative)  And I believe in hiring talented 14- to 15-year-olds.

Measuring the benefits of the biomedical revolution

That is the topic of my latest Bloomberg column.  Note that for most economic gains, total GDP and per capita GDP give roughly the same answers.  But when it comes to lifesaving, that may no longer be the case.  Here is one excerpt:

Take the vaccines against Covid. Of course the most important fact about them is that they reduce the amount of death and suffering. But what is their economic impact? The vaccines have been most helpful to the most vulnerable, namely the elderly or those with preexisting medical conditions. These are not the most productive cohorts of the economy. So the effectiveness of the vaccines might have actually lowered various social averages, such as per-capita GDP or per-capita productivity.

The extra life is a pure benefit. But to capture that benefit in numbers requires looking at the totals, not just the averages. Labor productivity per hour, for example, won’t necessarily increase. But total labor supply and total population will.

And this:

And what about those subpar returns on biomedical investments? That is a sign that most of the gains from innovation are being reaped by patients, users and consumers — not capitalists. Is that not exactly what everyone has been asking for?

There is much more at the link.  The bottom line is that many of the gains will come through “n,” not per hour productivity.
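The averages-versus-totals arithmetic can be made concrete with a toy example.  The numbers below are entirely hypothetical, a minimal sketch of the mechanism rather than anything from the column:

```python
# Toy numbers, entirely hypothetical: a lifesaving intervention that mostly
# benefits retirees leaves total GDP unchanged while lowering per-capita GDP,
# even though the extra lives are a pure gain.

OUTPUT_PER_WORKER = 50.0

def economy(workers, retirees):
    """Return (total GDP, per-capita GDP) for a stylized two-cohort economy."""
    total_gdp = workers * OUTPUT_PER_WORKER  # retirees produce no market output here
    per_capita = total_gdp / (workers + retirees)
    return total_gdp, per_capita

# Without the vaccine, 10 retirees die; with it, they survive.
total_no, pc_no = economy(workers=100, retirees=20)
total_yes, pc_yes = economy(workers=100, retirees=30)

assert total_yes == total_no   # total GDP: unchanged
assert pc_yes < pc_no          # per-capita GDP: falls
# The gain shows up in "n" (population and total labor supply),
# not in any per-person average.
```

The point generalizes: any measured average can fall even as total welfare rises, whenever the lives saved sit below the existing mean of whatever is being averaged.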

My favorite things Alaska

I haven’t done many of these in a while, mostly because I haven’t been to many new states or countries recently.  But Alaska I had never visited before (my remaining state, in fact), so here goes:

Classical music: John Luther Adams.  “The other John Adams,” his reputation continues to rise; now I would like to see one of his works performed live.  I am a fan of the sound textures and the broad expanses of his works, even if the programmatic aspects do not always delight me.  Become Ocean is his best-known piece.

Popular music: There is Jewel, I guess she is OK, and I can’t think of anyone else.

Politician: This is tough!  Nor did Andre Marrou acquit himself especially well over the years.  How about “theatre builder-upper”?  Then I can cite Edward Albee.

Affiliated writer: Jack London, obviously.  Still worth reading, not archaic, has held up remarkably well.

Movies, set in: Plenty of competition here.  There is Herzog’s Grizzly Man, and Never Cry Wolf (oddly forgotten but moving, plus the protagonist is named Tyler, which was rare in the early 1980s), and of course Chaplin’s The Gold Rush.  Into the Wild I haven’t seen.  Maybe I watched Abbott and Costello Lost in Alaska as a kid?  What am I missing?

Artist: Taking the entire cake has to be Alaskan indigenous art, but who should be the favorite?  I can’t bring myself to elevate Florence Nupok Malewotkuk to the number one position, so perhaps Nathan Jackson, who did Tlingit art?

Watercolor, affiliated with: Try this John La Farge, currently up at auction.

Throat singer: A strong area, but which ones exactly are from Alaska rather than Canada?  Janet Aglukkaq?  Don’t ask me!

Here is a good essay on Alaskan totem poles, from Eyak, Tlingit, Haida and Tsimshian cultures.

I can’t name a mask-maker, but the masks are arguably the highlight of the Alaskan indigenous tradition.

Any NBA players?  Am I supposed to like Carlos Boozer?

The bottom line: There is more than you might think at first.

From the comments, on AI safety

This is from Richard Ngo, who works on the governance team at OpenAI:

A few points:
1. I agree that the alignment community has generally been remiss in not trying hard enough to clarify the arguments in more formal papers.
2. The only peer-reviewed paper making the case for AI risk that I know of is: https://onlinelibrary.wiley.com/doi/10.1002/aaai.12064. Though note that my paper (the second you linked) is currently under review at a top ML conference.
3. I don’t think that a formal model would shed much light here. My goal in writing my paper was to establish misaligned power-seeking AGI as a credible scientific hypothesis; I think that most who think it’s credible would then agree that investigating it further should be a key priority, whether or not their credences are more like 10% or more like 90%.

From this batch of comments.  Here is Richard on Twitter.

Tuesday assorted links

1. Review of the new Philip Wallach book on Congress (Rep. Katie Porter’s book too).

2. Good Ding vs. Nepo coverage.

3. On properly translating Macron (having dealt with French diplomats, both through translation and not, I agree with the general points about context).  That said, the whole world has to receive the proper message, as a matter of common knowledge, and arguably he failed in that regard.

4. How AI differs in warfare.  And something about “BabyAGI.”  Self-improving AI making its debut?  And lots of discussion.

5. The roots of our military recruiting crisis, good and interesting piece.

6. Is Tupperware toast? And upscale compost (WSJ).

“Date me” docs

Here is one from Katja Grace; by the way, I know her a bit, very much like her, and find her very smart (NB: not interested in comments mocking Katja, put them somewhere else, I will delete them).  But that is not my main point today.  Do such documents work?  I have been hearing of them more often lately.  And how should we model them?

Should we think of them as batch auctions of a sort, namely wanting to get in a lot of bids at once rather than sequentially?  Which kinds of people should prefer such a batch auction?  (Btw, is there any paper in Science or Nature on this?  Should there be?)

Are they better suited for polyamory than monogamy?  Are batch auctions better suited for polyamory?  Because the process is more like assembling a portfolio?

Is this all somehow better suited for San Francisco and other “Woke” cultures, where perhaps asking someone out on a date counts as a microaggression?  I suppose the Date Me document gives you permission to reach out?  I am curious to read or hear some serious takes on this phenomenon.
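One toy way to frame the batch-versus-sequential question, my own sketch rather than anything from the documents themselves: compare accepting the best of N simultaneous responses against a classic sequential stopping rule (observe the first N/e candidates, then take the first one better than all of those).

```python
# Hypothetical toy model: "date me" doc as a batch auction vs. ordinary
# sequential dating modeled as an optimal-stopping (secretary-style) rule.
# Candidate "qualities" are i.i.d. uniform draws; all names and numbers
# here are illustrative assumptions.
import random

def batch(qualities):
    """Batch auction: see all bids at once, take the best."""
    return max(qualities)

def sequential(qualities):
    """Secretary-style rule: observe the first n/e, then accept the
    first later candidate who beats that benchmark."""
    n = len(qualities)
    cutoff = max(1, int(n / 2.718281828))
    benchmark = max(qualities[:cutoff])
    for q in qualities[cutoff:]:
        if q > benchmark:
            return q
    return qualities[-1]  # forced to settle for the last candidate

random.seed(0)
trials = [[random.random() for _ in range(20)] for _ in range(10_000)]
avg_batch = sum(batch(t) for t in trials) / len(trials)
avg_seq = sum(sequential(t) for t in trials) / len(trials)
# Batch weakly dominates on match quality in this setup; the interesting
# tradeoffs it ignores are timing, signaling, and search costs.
```

On these assumptions the batch mechanism wins on quality almost by construction, which suggests the real question is who bears the signaling and privacy costs of posting the document in the first place.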

This GPT-4 answer speaks for itself

TC: Which is the leading attempt to publish a canonical paper on AGI risk, in a leading science journal, refereed of course. The paper should have a formal model or calibration of some sort, working toward the conclusion of showing that the relevant risk is actually fairly high. Is there any such thing? Please answer as would a leading scientific expert in the relevant fields, including computer science and possibly also population ecology, systems theory, and other possible areas.

GPT-4: As a leading scientific expert, I must clarify that while there are numerous papers discussing various aspects of AGI (Artificial General Intelligence) risk, no single paper can be labeled as the “canonical” paper on the subject. It is worth noting that the field of AGI risk is interdisciplinary, involving computer science, artificial intelligence, ethics, population ecology, systems theory, and other domains. Consequently, the body of research on AGI risk is diverse, with various approaches and models, some of which may involve formal calibration.

One paper that may be of interest to you is “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation” by Brundage et al., published in 2018. This paper, co-authored by experts from various institutions, including OpenAI, the University of Oxford, and the Center for a New American Security, discusses potential risks from the malicious use of AI and offers recommendations for researchers, policymakers, and industry practitioners. While this paper does not provide a formal calibration model, it does shed light on the potential risks associated with AGI.

Another important work in the field is Nick Bostrom’s book “Superintelligence: Paths, Dangers, Strategies,” which discusses the risks associated with the development of AGI and offers possible strategies to mitigate those risks. Bostrom’s work has been influential in shaping the discourse on AGI risk, but it is not a refereed journal article….[and some concluding GPT blather]

Perhaps I am too quick on the draw here, but I don’t see this answer as far from “nothing, there is no such paper.”  Comments are open, of course, if you can surpass GPT-4, which perhaps is better at evolving into a world-destroying entity than it is at finding citations.  Further prods did not change the basic answer, and if anything GPT models tend to confabulate or hallucinate entries, not deny them.  Or perhaps in this case it is hiding the refereed articles and deceiving us?

And maybe I’ve missed it, but I’ve also never seen Scott Alexander or Zvi point to such a paper, or even a good example of a rejected paper aiming in this direction.  Nor have I seen them make a big stink about the absence of such a paper, though in virtually any other area they will hit you with a fire hose of citations and links to published models in refereed journals.

I’ve also asked a whole bunch of “people who ought to know” and not received a single concrete answer, one such individual responding immediately with the answer “zero.”

In part, I would like to encourage those fascinated with AGI risk to try to create and publish such a paper, or perhaps to fund it or otherwise encourage it.  Something more systematically fleshed out than “10 reasons why lists of 10 reasons might be a winning strategy.”  It would go a long way to giving the idea more credibility in the scientific community, not to mention with yours truly.  How about Nature?  Science?  Somewhere else?  I know top journals can be closed or unfair, but at the very least you can put the paper and the associated referee reports on-line for the rest of us to judge.  And then try it in a lesser journal, it still will get traction and you will get valuable feedback, of a very different kind than from on-line forums.

If the chance of existential risk from AGI is 99 percent, or 80 percent, or even 30 percent, surely some kind of modeled demonstration of the basic mechanics and interlocking pieces is possible.  Indeed a certain kind of clarity should be evident, at least conditional on the more extreme views being correct.  In general, I am not a fan of the “you should work on this!” strategy, but if you think the whole future of the entire world is at stake…shouldn’t you be obsessed with working on such a thing, if only to convince the rest of us?  And in as many different formats as possible, including the methods most commonly recognized by the scientific community?

In the meantime, if you are a young person interested in this issue, and you observe such a paucity of refereed, published model-based papers in the area — consider any area just to get your mind off the fraught and emotional topic of AGI existential risk — what would you infer from that absence?

And what if said community of commentators almost universally insisted they were the most extreme of rationalists?

Now none of this means the claims about extreme risk are wrong.  But you can think of it as a kind of propaedeutic to reading the literature and current debates.

Addendum: I have looked at papers such as these:

https://arxiv.org/abs/2206.13353, https://arxiv.org/abs/2209.00626, https://arxiv.org/abs/2109.13916

Whatever you think of them, they are not close to counting for my search.

Monday assorted links

1. Benevolent sexism.  And should single American women look for suitors abroad?

2. Does summer learning loss replicate? And on Covid learning loss recovery: “On average, we find that 20% of test score losses are recovered in English language arts (ELA) by 2022, compared to 37% in math.”

3. Balenciaga beige unicorn sneakers.  And, via Yana, MEN’S TRASH BAG LARGE POUCH IN BLACK.

4. Does Ozempic improve impulse control? (speculative)

5. No adverse labor supply effects from the expanded child tax credit (short-term only, though).

6. “Apparently, tsunami survivors were inclined to assume greater financial risk in the short-term while rebuilding their lives after the disaster.”  Link here.

Science is proceeding

In a few days, a £1.4bn probe will be blasted into space on an eight-year mission to find signs of life on other worlds in our solar system. The spacecraft will not head to local destinations such as the planet Mars, however. Instead, it will fly into deep space and survey the icy moons of distant Jupiter. In doing so, it will open up a new chapter in the hunt for extraterrestrial life.

The Jupiter Icy Moons Explorer – or Juice – will exploit an unexpected feature of our solar system. The greatest reserves of water turn out to exist on worlds very far from Earth, in deep space, and in orbit around the giant planets Jupiter and Saturn. Juice is the first mission to be launched specifically to explore these remote worlds.

Here is the full story, via mdschultz.

Where is the best place to live if a cataclysm comes?

My counterintuitive answer is northern Virginia, or at least the general DC area, putting LDS options aside.  I’m talking about asteroids, super-volcanoes, and nuclear exchanges, not AGI risk.  Here is a Bloomberg column on that topic:

I have a counterintuitive answer: If you live in a dense urban area, stay put — especially if, like me, you live in the suburbs of Washington, D.C.

The biggest advantage of the Washington region is that, in the case of a real catastrophe, it would receive a lot of direct aid. It’s not just that Congress and the White House are nearby — so are the Pentagon, the FBI, the CIA and hundreds if not thousands of government agencies. Insofar as there might be an emergency response to a cataclysmic event, the Washington area will be prioritized.

The region also has plenty of hospitals and doctors, and a wide variety of law-enforcement units — including the various federal agencies as well as police from Maryland, Virginia and D.C. If you care about order being restored, Washington will be better than most places.

Of course, a counterargument is that Washington is more likely than most places to be hit by a cataclysmic event, especially if it involves a nuclear exchange or some other weapon of mass destruction. But there’s “good news,” scare-quotes intended: If a foreign enemy is truly intent on targeting America’s capital, the conflict may be so extreme that it won’t matter where you go. (If I were a foreign power attacking the US, Washington would not be my first choice as a target, as it would virtually guarantee the complete destruction of my own country.)

I consider — and reject — New Zealand and the American West as alternate options.  New Zealand might not even let you in.