Thursday assorted links

1. More Leopold on alignment.  Leopold continues to be good on this topic.

2. I think gated, but here is Scott Alexander on my AI argument.  I am a big fan of Scott’s, but this is a gross misrepresentation of what I wrote.  Scott ignores my critical point that this is all happening anyway (he should talk more to people in DC), does not engage with the notion of historical reasoning (there is only a narrow conception of rationalism in his post), does not consider Hayek and the category of Knightian uncertainty, and does not consider the all-critical China argument, among other points.  Or how about the notion that we can’t fix for more safety until we see more of the progress?  Or the negative bias in long-worded, rationalist treatments of this topic?  Plus his restatement of my argument is simply not what I wrote.  Sorry Scott!  There are plenty of arguments you just can’t put into the categories outlined in LessWrong posts.

This may sound a little harsh, but the rationality community, EA movement, and the AGI arguers all need to radically expand the kinds of arguments they are able to process and deal with.  By a lot.  One of the most striking features of the “six-month Pause” plea was how intellectually limited and non-diverse — across fields — the signers were.  Where were the clergy, the politicians, the historians, and so on?  This should be a wake-up call, but so far it has not been.  Instead, this is the kind of arrogance we see.  Exactly like the public health authorities during the pandemic, who thought they had “the expertise” but were weak in their synthetic abilities for understanding social processes and how the whole picture fits together.

Almost as a rule, you will find the greatest weakness (and least real engaged interest) in the Doomer arguments when factual matters are up for grabs, such as whether there is a way to turn back.  (GPTs are super-popular consumer products with low marginal costs and lots of valuable business and military applications, and, unlike cloning, normal people don’t find them gross…they already exist, across multiple institutions, and yes the regulatory state is obstructive, but no, the CPSC isn’t going to ban them, sorry!  For better or worse, there is remarkably little panic about AGI in DC, and that would not change if they all read all those LessWrong blog posts.  That is simply not how our world works, and furthermore I think it is fine if I toss that point out in a single observation rather than going through one of those lengthy circumlocutions.)  Another example of Doomer hand-waving is on the China question.  The Chips Act is one approach, but it is unlikely to change the medium-term trajectory of what China can do, and in some ways it may accelerate it.  If anything, it raises tensions and boosts the case for America extending its AI lead.  Not to mention there are other nations and institutions besides China, and the scale-up costs are not obviously so large any more.  How about Open Source, for that matter?  Horse, barn door — live with it!  There just aren’t any many-step abstract arguments that are going to undo that reality.  So that should be the starting point for all the rest of the discussion.

3. The next step: the call for violence, imprisonment, etc.  Not surprising, but you can at least say that Eliezer is consistent.  First and foremost, you really do need to take a historical perspective on this call for violence, which rests on a quite abstract and (to say the least) not generally accepted argument.  Are airstrikes on “rogue data centers” really going to lower existential risk?  I take it these would be across borders as well, and would cover rogue data centers in Beijing too?  How about Tel Aviv?  I am happy to see this put on the table, however, and I hope it snaps some onlookers back to their senses.  If this is where your argument consistently leads, perhaps the method and premises require a rather radical reexamination.  Or at least that is what historical modes of reasoning would tend to suggest.
