Alex Murrell, The Age of Average

One of the best essays of this year, though I think the author considerably underrates how much heterogeneity we are building on the internet, globally, and also through GPT models.  Nonetheless an excellent read with some hard-hitting points and good photos to illustrate.  Here is one excerpt:

This article argues that from film to fashion and architecture to advertising, creative fields have become dominated and defined by convention and cliché. Distinctiveness has died. In every field we look at, we find that everything looks the same.

Welcome to the age of average.

Here is a photo of some (supposedly different) current cars:

You can read the whole thing here.  Note that in my model, the homogenizations we do observe spring precisely from the greater diversity on the fringes of the distribution.  People at the edges can get exactly what they want, so the fight over the mainstream middle becomes all the more mainstream, as it is targeting the true conformists in that particular area.

Thursday assorted links

1. More Leopold on alignment.  Leopold continues to be good on this topic.

2. I think gated, but here is Scott Alexander on my AI argument.  I am a big fan of Scott’s, but this is a gross misrepresentation of what I wrote.  Scott ignores my critical point that this is all happening anyway (he should talk more to people in DC), does not engage with the notion of historical reasoning (there is only a narrow conception of rationalism in his post), does not consider Hayek and the category of Knightian uncertainty, and does not consider the all-critical China argument, among other points.  Or how about the notion that we can’t fix for more safety until we see more of the progress?  Or the negative bias in long-worded, rationalist treatments of this topic?  Plus his restatement of my argument is simply not what I wrote.  Sorry Scott!  There are plenty of arguments you just can’t put into the categories outlined in LessWrong posts.

This may sound a little harsh, but the rationality community, EA movement, and the AGI arguers all need to radically expand the kinds of arguments they are able to process and deal with.  By a lot.  One of the most striking features of the “six-month Pause” plea was how intellectually limited and non-diverse — across fields — the signers were.  Where were the clergy, the politicians, the historians, and so on?  This should be a wake-up call, but so far it has not been.  Instead, this is the kind of arrogance we see.  Exactly like the public health authorities during the pandemic who thought they had “the expertise,” but they were weak in their synthetic abilities for understanding social processes and how the whole picture fits together.

Almost as a rule, you will find the greatest weakness (and least real engaged interest) in the Doomer arguments when factual matters are up for grabs, such as whether there is a way to turn back (GPTs are super-popular consumer products with low marginal costs, and lots of valuable business and military applications, and unlike cloning normal people don’t find them gross…they already exist, across multiple institutions, and yes the regulatory state is obstructive but no the CPSC isn’t going to ban them, sorry!  For better or worse, there is remarkably little panic about AGI in DC, and that would not change if they all read all those LessWrong blog posts.  That is simply not how our world works, and furthermore I think it is fine if I toss that point out in a single observation rather than going through one of those lengthy circumlocutions.).  Another example of Doomer hand-waving is on the China question.  The Chips Act is one approach, but it is unlikely to change the medium-term trajectory of what China can do, and in some ways it may accelerate it.  If anything, it raises tensions and boosts the case for America extending its AI lead.  Not to mention there are other nations and institutions besides China, and the scale-up costs are not obviously so large any more.  How about Open Source, for that matter?  Horse, barn door — live with it!  There just aren’t any many-step abstract arguments that are going to undo that reality.  So that should be the starting point for all the rest of the discussion.

3. The next step: the call for violence, imprisonment, etc.  Not surprising, but you can at least say that Eliezer is consistent.  You really do need to take first and foremost a historical perspective on this call for violence on the basis of a quite abstract, not generally accepted (to say the least) argument.  Are airstrikes on “rogue data centers” really going to lower existential risk?  I take it these are across borders as well and would cover rogue data centers in Beijing too?  How about Tel Aviv?  I am happy to see this put on the table, however, and I hope it snaps some onlookers back to their senses.  If this is where your argument consistently leads, perhaps the method and premises require a rather radical reexamination.  Or at least that is what historical modes of reasoning would tend to suggest.

How will AI change *other* policy debates?

Not debates about AI, no, the other ones.  For one thing, more ideas and more intelligence will mean more projects.  That raises the value of permitting reform, and raises the value of YIMBY.  But perhaps the most surprising conclusion comes on the immigration issue, with apologies to Garett Jones:

Which policy issue might decrease in importance? My prediction is immigration. GPT-4 is already passing a wide swath of professional tests, ranging from bar exams to medical qualifiers to economics exams. The service still requires more integration with other software tools, but in the not-too-distant future, the US will have added the equivalent of many millions of intelligent minds (albeit not bodies) to its labor force.

I have long favored boosting America’s immigration flow by about three times, to give the US an intake roughly on a per-capita par with Canada and Australia. This is still a good idea, but it should be done in a different way. Rather than more high-skilled immigration, the new priority might be more lower-wage migrants. The US might want a “bar-belled” immigration policy, which gives priority to AI researchers and engineers on the high-wage end, and workers such as construction laborers on the low-wage end.

The AI researchers, by creating more and better AI, would serve as a substitute for many other potential high-skilled immigrants. But all those new ideas will need people to turn them into actual projects in the physical world. In contrast, importing additional humanities professors from Europe no longer seems so vital when you can ask the AI instead.

Here is my full Bloomberg column.  To be sure, we need the smart, ambitious carpenters!

My Conversation with Jonathan GPT Swift

Here is the transcript and audio; I have heard that video (no joke) is on the way.  Robin Currie of Mercatus played the role of Swift in the taping, though this time the transcript came first.  Here is part of the summary:

GPT Swift discusses his support for the Church of Ireland, his shift from the Whigs to the Tories, and his opposition to William Wood’s copper coinage in Ireland. He also talks about his works, including Gulliver’s Travels and A Modest Proposal, and his skepticism of moral and intellectual progress. Swift addresses rumors about his relationship with Esther Johnson and his fascination with scatological themes in his works. He also discusses his early life in England, his intellectual mentor Sir William Temple, and his jovial attitude towards death.

Here is one excerpt:

COWEN: Queen Anne: overrated or underrated? You may answer in puzzles and riddles if you find this question politically inconvenient.

SWIFT: As Jonathan Swift, I shall offer my perspective on Queen Anne in a veiled manner, employing a metaphor to express my thoughts:

Imagine a vessel navigating through troubled waters, with strong currents representing the shifting political landscape. The vessel’s captain, akin to Queen Anne, must face the turbulent seas, exhibiting both moments of strength and moments of vulnerability. Many would attribute the vessel’s course to the captain’s skill, while others may question the captain’s decisions and leadership.

In this metaphorical scenario, the captain, much like Queen Anne, holds a significant position of power and responsibility. However, the captain’s competence may be a subject of debate. Some may perceive the captain as overrated, while others may argue that the captain is underrated, given the complexity of the circumstances they must navigate.

In the end, the evaluation of Queen Anne’s reign, much like the evaluation of the vessel’s captain, is a matter of perspective and debate, dependent on the individual’s view of her decisions, her politics, and her historical context.

How about this for humor:

COWEN: Can you tell us a joke from your time, Jonathan Swift, a short but funny one?

SWIFT: I shall share a short jest that was popular during my time:

Why do we never tell secrets on a farm?

Because the potatoes have eyes and the corn has ears.

COWEN: So, what is a joke from your time that *you* would find funny?

SWIFT: My humor often leaned towards satire and irony. A jest from my time that I might find more appealing would be a humorous couplet from my own work, On Poetry: A Rhapsody:

“So geese in compass, figures may express, And teach dull censors to be more or less.”

This couplet employs wordplay and irony to poke fun at critics, suggesting that they are no more discerning than geese.

Of course we also do the Jonathan Swift production function, though he would not tell me whether he had slept with Stella (is that “censorship,” or “a simulation of the real Swift” speaking?).  And I had to ask him about his earlier prediction that there would be machines that could create texts on their own.

As for method, here is TC:

Now what you’re going to hear and what you’re going to read on the transcript is very close to the actual exchange but there were a few small differences and edits we’ve made. Very often the GPT would begin the answer with, “As Jonathan Swift.” We simply took that out. Some of the longer answers, there were resummaries at the end. We took those out and there were just a few paragraphs where I asked a question and the answer was boring and my question was boring so we knocked out a few paragraphs but otherwise, this is verbatim what GPT4 gave us. I did not keep on repeating prompts trying to get the answer I wanted. This is really awfully close to the dialogue.

Do read the whole thing.  It is too “textbook-y” in parts, but overall I was extremely impressed.

Wednesday assorted links

1. On existential risk and AI, Zvi responds at length.  And a response from Richard Ngo at OpenAI (at least he knows what mood affiliation is).  And Joseph Walker on existential risk.

2. First mRNA vaccine against bacteria?

3. Index of cultural differences, across many countries.

4. How much would you have to pay Russian soldiers to get them to defect?

5. Matthew Barnett on the Pause letter.

6. Every day.

7. Long post by Leopold Aschenbrenner on alignment.

Modeling the current NBA

The surprise, and the irony, is that the more good players there are, the more important the great ones have become. The proliferation of offensive threats has meant that defenses can’t train their attention all on one person; that means that there are better shots for the best players to take, and the best players have become even better at making them. They have more room to drive to the basket, where shots are hyper-efficient. They are more practiced and skilled at hitting long threes. They are better at drawing fouls and savvier about off-ball movement, picks, and screens. Most of all, perhaps, they can pass, and the threat of those passes makes them harder to defend. More than ever, offenses revolve around a single star—a phenomenon that many around the N.B.A. have taken to calling heliocentrism, a term that the Athletic writer Seth Partnow used in a 2019 column describing the Dallas Mavericks star Luka Dončić. Hero ball “didn’t go away,” Kirk Goldsberry, an ESPN analyst, told the podcast “ESPN Daily.” “It just went to M.I.T., got a degree in analytics, and rebranded as heliocentrism.”

Here is more from Louisa Thomas at The New Yorker.

How might AI impact developing economies?

Cheap, tailored expertise

Many of the poor have little access to experts on topics ranging from physical and mental health to agriculture and entrepreneurship. Hiring experts can be transformative: Bloom et al (2013) found that management consultants substantially improved production at textile factories in India. Many envisioned that the internet would level access to expertise, allowing entrepreneurs in Delhi and farmers in Western Kenya equal access to the world’s best knowledge. But, as anyone who owns a dusty textbook knows, raw information is not enough to lead to action.

Most of the world’s knowledge is not written for the world’s poor. Much is written in English, uses technical jargon, and relies on metaphors and references that make sense to people in Los Angeles but not in Lagos. AI can unlock the insight in this information for broader audiences. The newest generation of AI chatbots can not only translate between languages, but also change reading levels, remove jargon, and rewrite knowledge to use local customs and metaphors. These chatbots also allow you to have a conversation about the topic, asking for clarification on specific parts, or specifying that your needs differ from what the system had assumed.

While these systems sometimes make mistakes, their quality and ability to translate are improving quickly. And they allow almost zero-cost access to tailored expertise that would otherwise require hiring experts at prices prohibitively costly for the poor. This tailored expertise might improve business processes across developing economies for motivated people. Some startups are already developing targeted advisors for specific tasks, like choosing between schools (ConsiliumBots). Similar advisors could also help deliver medical advice to rural populations.

Here is more from Daniel Bjorkegren, an economist at Brown University.  Here is his paper on nostalgic demand.

The permanent pause?

Here is an Elon Musk-signed petition, with many other luminaries, calling for a pause in “Giant” AI experiments.  Here is one excerpt:

In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Few people are against such developments, or at least most of them.  Yet this passage, to my eye, shows how few realistic, practical alternatives the pausers have.  Does regulation of any area, even of areas simpler than AI, ever work so well?  Exactly how long is all this supposed to take?  How well do those same signers expect our Congress to handle, say, basic foreign policy decisions?  The next debt ceiling crisis?  Permitting reform?

Is there any mention of public choice/political economy questions in the petition, or even a peripheral awareness of them?  Any dealing with national security issues and America’s responsibility to stay ahead of potentially hostile foreign powers?  And what about the old DC saying, running something like “in politics there is nothing so permanent as the temporary”?

Might we end up with a regulatory institution as good as the CDC?

By the way, what does it mean to stop the “progress” while not stopping, and not ceasing to improve, the safety testing?  Are those two goals really so separable?

Overall this petition is striking for its absence of concrete, practicable recommendations, made in the light of the semi-science of political economy.  You can think of it as one kind of evidence that these individuals are not so very good at predicting the future.

Regulation Can’t Prevent the Next Financial Crisis

That is the topic of my latest Bloomberg column, here is one piece of it:

For another, an effort to make banks safer can effectively push risk into other sectors of finance. It can move into money market funds, commercial credit lenders, fintech, insurance companies, trade credit, and elsewhere. These institutions are generally less regulated than are banks and don’t have the same kind of direct access to the Federal Reserve’s discount window.

This is no mere hypothetical: In the 2008 crisis there were major problems with both money market funds and insurance companies.

There is a temptation, in light of recent events, to greatly stiffen bank capital requirements — to raise them to, say, 40%. Again, that would make banks safer, but it would not necessarily make the financial system as a whole safer.

And so policymakers allow banks to continue along their potentially precarious path. Whatever their reasons, the fact remains that bank regulations can get only so tough before financial risk starts spreading to other, possibly more dangerous, corners of the system.

And:

During the 2008 financial crisis, for example, there was an excess concentration of derivatives activity in AIG, later necessitating a bailout. Financial derivatives acquired a bad name in many quarters, and government securities were viewed as a safe haven. With Silicon Valley Bank, the problem was the inverse: Its portfolio was insufficiently hedged with derivatives and interest-rate swaps, leaving it vulnerable to major swings in interest rates. It should have used derivatives more.

It is easy enough to say, “We can write regulations so this won’t happen again.” But those regulations won’t prevent new kinds of mistakes from happening.

A brief observation on AGI risk and employee selection (from my email)

  • Stunting growth now in the development of artificial intelligence just makes the probability of a bad future outcome more likely, as the people who are prosocial and thoughtful are more likely to be discouraged from the field if we attach a stigma to it. My view is that most people are good and care about others and our collective future. We need to maintain this ratio of “good people” in AI research. We can’t have this become the domain of malevolent actors. It’s too important for humanity.

That is from Ben R.

What I’ve been reading

1. Judith A. Green, The Normans: Power, Conquest & Culture in 11th-Century Europe.  A very clear and to the point book on a complex topic.  This is a good one to read with GPT-4 accompaniment for your queries.  In Sicily, near Palermo, the Normans produced one of my favorite sites in all of Europe.

2. John M. MacKenzie, A Cultural History of the British Empire.  “A vital characteristic of polo was that since it lacked immediate physical contact it could be jointly played by British and Indians, which of course meant elite Indians, inevitably associated with the princely states.”  A very good book on both a) early globalization, and b) actually understanding the British empire.  I hadn’t known that during the 1930s and 40s, maximum years of resistance to the British empire, cricket tournaments largely were abandoned.

3. Carmela Ciuraru, Lives of the Wives: Five Literary Marriages.  I hadn’t even known Patricia Neal was married to Roald Dahl.  Overall I enjoy intellectual/romance gossip books, and this is a good one.  Full of actual facts about the writings, not just the affairs and the marriages and divorces.  Moravia/Morante was my favorite chapter.  Here is a Guardian review, superficially you might think there is no real message in this book, but then again…

4. Lucy Wooding, Tudor England: A History.  A good book, but most of all a very good book to read with GPT-4 as your companion.

Jeanna Smialek, Limitless: The Federal Reserve Takes on a New Age of Crisis, is a good, readable, non-technical introduction to the Fed, focusing on personalities and internal mechanics, rather than macroeconomic theories.

Rainer Zitelmann, In Defense of Capitalism: Debunking the Myths.  A very good pro-capitalism book, broadly in the Milton Friedman tradition.

Peter Frankopan, The Earth Transformed: An Untold History.  Long, full of information, and well written, but somehow lacks a central organizing thesis to hold it all together.

Murray Pittock, Enlightenment in a Smart City: Edinburgh’s Civic Development 1660-1750 is an excellent book on how the built environment of Edinburgh, and its building reforms and improvements, shaped the Scottish Enlightenment.  Gives a better sense of the Edinburgh of the time than any other book I know.  I don’t mean the thinkers in the city, I mean the city itself.

Charles Dunst, Defeating the Dictators: How Democracy Can Prevail in the Age of the Strongman.  Full of true claims, common sense, and a needed dose of optimism.

I have not yet read Mark Calabria’s Shelter from the Storm: How a Covid Mortgage Meltdown was Averted, a Cato Institute book.

Monday assorted links

1. Conservatives often win their battles.

2. Some Afro-Surinamese music.

3. Vyacheslav Dvornikov interviews me (in Russian).  And a related piece (in English) on Russia moving toward dependence on China.

4. “A simple heuristic of only judging the level of detail in the message consistently allowed people to discriminate lies from truths.”

5. Small teams vs. large teams.

6. KnowledgeOnaChip.

7. “The second hitman then subcontracted to another hitman, who then subcontracted to a fourth, who gave the job to a fifth.”

Nepo vs. Ding

It starts in less than two weeks, in Astana.  But unlike those Karpov-Korchnoi matches in the 1970s, the soon to be former world chess champion, Magnus Carlsen, is still very much on the scene and still is widely regarded as the #1 player, as his various ratings confirm.

How will that change the incentives of the two combatants in Astana?  Will that induce the two players to try harder and to take more risks?  If you squeak by with a bunch of draws in the Petroff, and win the rapid tiebreak on your opponent’s single blunder in time trouble, will anyone think of you as the real world champion?  Alternatively, if you trounce your opponent by a three-point margin, people might begin to wonder whether Carlsen really is the automatic favorite.  Furthermore, there will be no “endowment effect” from either player already holding the title.  It will feel as if there is little to lose from taking chances over the board.

So I predict a hard-fought match with a lot of excitement.  Losing the match is not that much worse than winning it, for a change.  And winning on tiebreaks will count for less than it would under normal circumstances.

I am predicting Nepo to win, odds 65-35.  Ding hasn’t actually won anything, but Nepo has taken the Candidates twice in a row, no mean feat.  He has the experience advantage of having already played on the big stage, against MC at that, and been through all the prep.  (GPT-4 by the way predicts Nepo 55-45.)

Furthermore, for Ding I believe it is not easy to represent all of China, with the national pressures that implies.

Your views?