The permanent pause?
Here is a petition, signed by Elon Musk and many other luminaries, calling for a pause on “giant” AI experiments. Here is one excerpt:
In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.
Few people are against such developments, or at least against most of them. Yet this passage, to my eye, shows how few realistic, practical alternatives the pausers have. Does regulation of any area, even simpler ones than AI, ever work so well? Exactly how long is all this supposed to take? How well do those same signers expect our Congress to handle, say, basic foreign policy decisions? The next debt ceiling crisis? Permitting reform?
Is there any mention of public choice/political economy questions in the petition, or even a peripheral awareness of them? Any dealing with national security issues and America’s responsibility to stay ahead of potentially hostile foreign powers? And what about the old DC saying, running something like “in politics there is nothing so permanent as the temporary”?
Might we end up with a regulatory institution as good as the CDC?
By the way, what does it mean to stop the “progress” while not ceasing to improve the safety testing? Are those two goals really so separable?
Overall this petition is striking for its absence of concrete, practicable recommendations, made in the light of the semi-science of political economy. You can think of it as one kind of evidence that these individuals are not so very good at predicting the future.
Regulation Can’t Prevent the Next Financial Crisis
That is the topic of my latest Bloomberg column, here is one piece of it:
For another, an effort to make banks safer can effectively push risk into other sectors of finance. It can move into money market funds, commercial credit lenders, fintech, insurance companies, trade credit, and elsewhere. These institutions are generally less regulated than are banks and don’t have the same kind of direct access to the Federal Reserve’s discount window.
This is no mere hypothetical: In the 2008 crisis there were major problems with both money market funds and insurance companies.
There is a temptation, in light of recent events, to greatly stiffen bank capital requirements — to raise them to, say, 40%. Again, that would make banks safer, but it would not necessarily make the financial system as a whole safer.
And so policymakers allow banks to continue along their potentially precarious path. Whatever their reasons, the fact remains that bank regulations can get only so tough before financial risk starts spreading to other, possibly more dangerous, corners of the system.
And:
During the 2008 financial crisis, for example, there was an excess concentration of derivatives activity in AIG, later necessitating a bailout. Financial derivatives acquired a bad name in many quarters, and government securities were viewed as a safe haven. With Silicon Valley Bank, the problem was the inverse: Its portfolio was insufficiently hedged with derivatives and interest-rate swaps, leaving it vulnerable to major swings in interest rates. It should have used derivatives more.
It is easy enough to say, “We can write regulations so this won’t happen again.” But those regulations won’t prevent new kinds of mistakes from happening.
Tuesday assorted links
1. Some concrete thoughts on AI regulation (am not convinced, but goes way beyond most discussions).
2. One good prompt for GPT. And Richard Ngo’s AI predictions.
3. A Toby Ord calculation on the odds of being electorally decisive.
4. The economics of pay transparency.
5. More on the import of LLaMA, and will Siri make a comeback?
America’s Zero-Sum Economics Doesn’t Add Up
Adam Posen has an excellent piece in Foreign Policy:
Beginning with the Trump administration, and accelerating under the Biden administration, U.S. trade and industrial policy has prioritized relocating manufacturing production back to the United States. For all their differences, both administrations disregarded other countries in this pursuit. Both also attacked international trade and investment as harmful to U.S. economic and national security, even though the rules for that very system were established by the United States and serve its interests. Along with members of Congress from both parties, the Biden administration has sought to take away production from others in a zero-sum way—explicitly from China and a bit more courteously from others.
This policy approach, while having considerable popular appeal at home, is based on four profound analytic fallacies: that self-dealing is smart; that self-sufficiency is attainable; that more subsidies are better; and that local production is what matters. Each of these assumptions is contradicted by more than two centuries of well-researched history of foreign economic policies and their effects.
The US has benefited from leading a rules-based system of global trade, but it is throwing the rules away to go after individual countries on a one-on-one basis.
In big-league sports, the best job is to be league commissioner. As commissioner, you make money whichever team wins or loses on a given day, you are welcome at every stadium (even if occasionally booed), and you can ultimately decide the big questions of how the game is played and who is allowed to own a team. If you instead become identified with a single team, sometimes you win, sometimes you lose, but most importantly, others have an interest in your losing. You might even get repeatedly punished for cheating, instead of being the one to decide who is cheating.
Buy American doesn’t work.
The idea of “Buy American” has broad populist appeal. It connotes an economy that is self-sufficient, producing all it needs, and “putting American workers first.” Yet detailed research has repeatedly shown that policies aimed at maximizing domestic manufacturing employment rather than the development and adoption of new technologies are not only doomed to fail but crowd out the very industrial and trade policies that contribute the most to innovation, national security, and decarbonization.
The US should bet on rules and growth.
At its core, a successful U.S. industrial policy is one that promotes the widespread diffusion and adoption of the best technologies, even if that means the United States purchasing them from production located abroad. Innovation and technical progress are accelerated by having common standards at global scale, not by politically captured industries with barriers to entry. This approach is especially necessary for decarbonization but also to increase supply chain resilience and the ability of other countries to stand up to Chinese threats.
Read the whole thing.
A brief observation on AGI risk and employee selection (from my email)
- Stunting growth now in the development of artificial intelligence just makes the probability of a bad future outcome more likely, as the people who are prosocial and thoughtful are more likely to be discouraged from the field if we attach a stigma to it. My view is that most people are good and care about others and our collective future. We need to maintain this ratio of “good people” in AI research. We can’t have this become the domain of malevolent actors. It’s too important for humanity.
That is from Ben R.
What I’ve been reading
1. Judith A. Green, The Normans: Power, Conquest & Culture in 11th-Century Europe. A very clear and to the point book on a complex topic. This is a good one to read with GPT-4 accompaniment for your queries. In Sicily, near Palermo, the Normans produced one of my favorite sites in all of Europe.
2. John M. MacKenzie, A Cultural History of the British Empire. “A vital characteristic of polo was that since it lacked immediate physical contact it could be jointly played by British and Indians, which of course meant elite Indians, inevitably associated with the princely states.” A very good book on both a) early globalization, and b) actually understanding the British empire. I hadn’t known that during the 1930s and ’40s, the peak years of resistance to the British empire, cricket tournaments were largely abandoned.
3. Carmela Ciuraru, Lives of the Wives: Five Literary Marriages. I hadn’t even known Patricia Neal was married to Roald Dahl. Overall I enjoy intellectual/romance gossip books, and this is a good one. Full of actual facts about the writings, not just the affairs and the marriages and divorces. Moravia/Morante was my favorite chapter. Here is a Guardian review, superficially you might think there is no real message in this book, but then again…
4. Lucy Wooding, Tudor England: A History. A good book, but most of all a very good book to read with GPT-4 as your companion.
5. Jeanna Smialek, Limitless: The Federal Reserve Takes on a New Age of Crisis. A good, readable, non-technical introduction to the Fed, focusing on personalities and internal mechanics rather than macroeconomic theories.
6. Rainer Zitelmann, In Defense of Capitalism: Debunking the Myths. A very good pro-capitalism book, broadly in the Milton Friedman tradition.
7. Peter Frankopan, The Earth Transformed: An Untold History. Long, full of information, and well written, but somehow lacks a central organizing thesis to hold it all together.
8. Murray Pittock, Enlightenment in a Smart City: Edinburgh’s Civic Development 1660-1750. An excellent book on how the built environment of Edinburgh, and its building reforms and improvements, shaped the Scottish Enlightenment. Gives a better sense of the Edinburgh of the time than any other book I know. I don’t mean the thinkers in the city, I mean the city itself.
9. Charles Dunst, Defeating the Dictators: How Democracy Can Prevail in the Age of the Strongman. Full of true claims, common sense, and a needed dose of optimism.
I have not yet read Mark Calabria’s Shelter from the Storm: How a Covid Mortgage Meltdown was Averted, a Cato Institute book.
Monday assorted links
Nepo vs. Ding
The match starts in less than two weeks, in Astana. But unlike in those Karpov-Korchnoi matches of the 1970s, the soon-to-be former world chess champion, Magnus Carlsen, is still very much on the scene and is still widely regarded as the #1 player, as his various ratings confirm.
How will that change the incentives of the two combatants in Astana? Will that induce the two players to try harder and to take more risks? If you squeak by with a bunch of draws in the Petroff, and win the rapid tiebreak on your opponent’s single blunder in time trouble, will anyone think of you as the real world champion? Alternatively, if you trounce your opponent by a three-point margin, people might begin to wonder if Carlsen was the automatic favorite. Furthermore, there will be no “endowment effect” from either player already holding the title. It will feel as if there is little to lose from taking chances over the board.
So I predict a hard-fought match with a lot of excitement. Losing the match is not that much worse than winning it, for a change. And winning on tiebreaks will count for less than it would under normal circumstances.
I am predicting Nepo to win, odds 65-35. Ding hasn’t actually won anything, but Nepo has taken the Candidates twice in a row, no mean feat. He has the experience advantage of having already played on the big stage, against MC at that, and been through all the prep. (GPT-4 by the way predicts Nepo 55-45.)
Furthermore, for Ding I believe it is not easy to represent all of China, with the national pressures that implies.
Your views?
The Public Choice Outreach Conference!
The Public Choice Outreach Conference is June 9-11 in Arlington, VA near Washington, DC. Please apply (it’s free) and please encourage your students to apply. More details in the flyer (pdf).

Existential risk, AI, and the inevitable turn in human history
In several of my books and many of my talks, I take great care to spell out just how special recent times have been, for most Americans at least. For my entire life, and a bit more, there have been two essential features of the basic landscape:
1. American hegemony over much of the world, and relative physical safety for Americans.
2. An absence of truly radical technological change.
Unless you are very old (old enough to have taken in some of WWII, or to have been drafted for Korea or Vietnam), those features probably describe your entire life as well.
In other words, virtually all of us have been living in a bubble “outside of history.”
Now, circa 2023, at least one of those assumptions is going to unravel, namely #2. AI represents a truly major, transformational technological advance. Biomedicine might too, but for this post I’ll stick to the AI topic, as I wish to consider existential risk.
#1 might unravel soon as well, depending how Ukraine and Taiwan fare. It is fair to say we don’t know, nonetheless #1 also is under increasing strain.
Hardly anyone you know, including yourself, is prepared to live in actual “moving” history. It will panic many of us, disorient the rest of us, and cause great upheavals in our fortunes, both good and bad. In my view the good will considerably outweigh the bad (at least from losing #2, not #1), but I do understand that the absolute quantity of the bad disruptions will be high.
I am reminded of the advent of the printing press, after Gutenberg. Of course the press brought an immense amount of good, enabling the scientific and industrial revolutions, among many other benefits. But it also enabled the writings of Lenin and Hitler, and Mao’s Little Red Book. It is a moot point whether you can “blame” those on the printing press; nonetheless the press brought (in combination with some other innovations) a remarkable amount of true, moving history. How about the Wars of Religion and the bloody 17th century to boot? Still, if you were redoing world history you would take the printing press in a heartbeat. Who needs poverty, squalor, and recurrences of Genghis Khan-like figures?
But since we are not used to living in moving history, and indeed most of us are psychologically unable to truly imagine living in moving history, all these new AI developments pose a great conundrum. We don’t know how to respond psychologically, or for that matter substantively. And just about all of the responses I am seeing I interpret as “copes,” whether from the optimists, the pessimists, or the extreme pessimists (e.g., Eliezer). No matter how positive or negative the overall calculus of cost and benefit, AI is very likely to overturn most of our apple carts, most of all for the so-called chattering classes.
The reality is that no one at the beginning of the printing press had any real idea of the changes it would bring. No one at the beginning of the fossil fuel era had much of an idea of the changes it would bring. No one is good at predicting the longer-term or even medium-term outcomes of these radical technological changes (we can do the short term, albeit imperfectly). No one. Not you, not Eliezer, not Sam Altman, and not your next door neighbor.
How well did people predict the final impacts of the printing press? How well did people predict the final impacts of fire? We even have an expression “playing with fire.” Yet it is, on net, a good thing we proceeded with the deployment of fire (“Fire? You can’t do that! Everything will burn! You can kill people with fire! All of them! What if someone yells ‘fire’ in a crowded theater!?”).
So when people predict a high degree of existential risk from AGI, I don’t actually think “arguing back” on their chosen terms is the correct response. Radical agnosticism is the correct response, where all specific scenarios are pretty unlikely. Nonetheless I am still for people doing constructive work on the problem of alignment, just as we do with all other technologies, to improve them. I have even funded some of this work through Emergent Ventures.
I am a bit distressed each time I read an account of a person “arguing himself” or “arguing herself” into existential risk from AI being a major concern. No one can foresee those futures! Once you keep up the arguing, you also are talking yourself into an illusion of predictability. Since it is easier to destroy than create, once you start considering the future in a tabula rasa way, the longer you talk about it, the more pessimistic you will become. It will be harder and harder to see how everything hangs together, whereas the argument that destruction is imminent is easy by comparison. The case for destruction is so much more readily articulable — “boom!” Yet at some point your inner Hayekian (Popperian?) has to take over and pull you away from those concerns. (Especially when you hear a nine-part argument based upon eight new conceptual categories that were first discussed on LessWrong eleven years ago.) Existential risk from AI is indeed a distant possibility, just like every other future you might be trying to imagine. All the possibilities are distant, I cannot stress that enough. The mere fact that AGI risk can be put on a par with those other also distant possibilities simply should not impress you very much.
Given this radical uncertainty, you still might ask whether we should halt or slow down AI advances. “Would you step into a plane if you had radical uncertainty as to whether it could land safely?” I hear some of you saying.
I would put it this way. Our previous stasis, as represented by my #1 and #2, is going to end anyway. We are going to face that radical uncertainty anyway. And probably pretty soon. So there is no “ongoing stasis” option on the table.
I find this reframing helps me come to terms with current AI developments. The question is no longer “go ahead?” but rather “given that we are going ahead with something (if only chaos) and leaving the stasis anyway, do we at least get something for our trouble?” And believe me, if we do nothing yes we will re-enter living history and quite possibly get nothing in return for our trouble.
With AI, do we get positives? Absolutely, there can be immense benefits from making intelligence more freely available. It also can help us deal with other existential risks. Importantly, AI offers the potential promise of extending American hegemony just a bit more, a factor of critical importance, as Americans are right now the AI leaders. And should we wait, and get a “more Chinese” version of the alignment problem? I just don’t see the case for that, and no I really don’t think any international cooperation options are on the table. We can’t even resurrect WTO or make the UN work or stop the Ukraine war.
Besides, what kind of civilization is it that turns away from the challenge of dealing with more…intelligence? That has not the self-confidence to confidently confront a big dose of more intelligence? Dare I wonder if such societies might not perish under their current watch, with or without AI? Do you really want to press the button, giving us that kind of American civilization?
So we should take the plunge. If someone is obsessively arguing about the details of AI technology today, and the arguments on LessWrong from eleven years ago, they won’t see this. Don’t be suckered into taking their bait. The longer a historical perspective you take, the more obvious this point will be. We should take the plunge. We already have taken the plunge. We designed/tolerated our decentralized society so we could take the plunge.
See you all on the other side.
(Younger) Spinoza

Côte d’Ivoire claim of the day
Côte d’Ivoire citizens pay the highest income taxes in the world according to this year’s survey findings by World Population Review.
While its sales and corporate tax rates may be considerably lower than those of other countries globally, Côte d’Ivoire’s income tax rate, at 60%, is markedly higher than in developed countries.
Only Finland (56.95%), Japan (55.97%), Denmark (55.90%), and Austria (55%) closely follow Côte d’Ivoire to round out the top five countries with the highest income tax, in a study that surveyed over 150 countries.
How much people pay of course is yet another matter. Here is the link, via Jodi Ettenberg.
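That gap between the statutory top rate and what people actually pay is easy to see with a little arithmetic: in a progressive schedule, the 60% rate applies only to income above the top bracket threshold, so the effective (average) rate is lower. A minimal sketch, using purely hypothetical brackets (not Côte d’Ivoire’s actual schedule):

```python
def tax_owed(income, brackets):
    """Tax under a progressive schedule.

    brackets: list of (lower_bound, marginal_rate), sorted ascending.
    Each rate applies only to the slice of income inside its bracket.
    """
    owed = 0.0
    for i, (lower, rate) in enumerate(brackets):
        upper = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
        if income > lower:
            owed += (min(income, upper) - lower) * rate
    return owed

# Illustrative brackets only -- NOT real Ivorian tax law.
brackets = [(0, 0.0), (10_000, 0.25), (50_000, 0.40), (100_000, 0.60)]

income = 150_000
owed = tax_owed(income, brackets)        # 60,000 under these brackets
effective = owed / income                # 0.40 effective vs 0.60 statutory top rate
```

Even with a 60% statutory top rate, the taxpayer here pays an effective 40%, which is why headline top-rate rankings say little about burdens actually borne.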
Sunday assorted links
1. “I asked GPT4 to perform a risk assessment on Silicon Valley Bank...”
2. New four-part documentary on chess, I have not seen it but it does appear serious.
3. An economic argument against non-compete clauses, one step deeper than the usual.
4. Japan could do much more with geothermal power (NYT).
6. Saving the dog.
Lifespans of the European Elite, 800-1800
I analyze the adult age at death of 115,650 European nobles from 800 to 1800. Longevity began increasing long before 1800 and the Industrial Revolution, with marked increases around 1400 and again around 1650. Declines in violent deaths from battle contributed to some of this increase, but the majority must reflect other changes in individual behavior. There are historic spatial contours to European elite mortality; Northwest Europe achieved greater adult lifespans than the rest of Europe even by 1000 AD.
Here is the paper by Neil Cummins, via Matt Yglesias.
New Emergent Ventures winners, 25th cohort
Duncan McClements, 17, incoming at King’s College Cambridge, economics, general career and research support.
Jasmine Wang and team (Jasmine is a repeat winner), Trellis, AI and the book.
Sophia Brown, Berlin/Brooklyn, to study the State Department, and general career development.
Robert Tolan, western Ireland, farmer and math Olympiad winner, YIMBY by street for Ireland.
Conor Durkin, Chicago, to write a Chicago city Substack.
Guido Putignano, Milan/Zurich, to do a summer internship in computational biology for cell therapies, at Harvard/MIT.
Michelle K. Huang, to revitalize Japanese real estate and to enable a creative community in Japan, near Kyoto.
Rasheed Griffith, repeat winner, to found a Caribbean think tank.
The Fitzwilliam, a periodical of ideas, Ireland. To expand and build it out; Fergus McCullough and Sam Enright, both repeat winners.
Lyn Stoler, Los Angeles, general career development and to develop material for a new pro-growth, pro-green agenda for states and localities.
Gwen Lester, Chicago, to develop a center for abused, battered, and sexually abused women, namely GLC Empowerment Center, also known as Nana’s House.
Sabrina Singh, Ontario, pre-college, to help her study of neurotechnology.
And Emergent Ventures Ukraine:
Isa Hasenko, eastern Ukraine, to deliver medical care in eastern Ukraine through a digital information system with real-time tracking, tracing every allocation. He works with Fintable.io and MissionKharkiv.com.
Stephan Hosedlo, Lviv, to expand his company selling farm products and herbal products, and to buy a tractor.
Olesya Drashkaba, Kyiv, Sunseed Art, a company to market Ukrainian art posters around the world.
Peter Chernyshov, Edinburgh, mathematician, to run a math education project, Kontora Pi, teaching advanced math to talented kids and schoolteachers in Ukraine, and to produce more math videos and recruit more teachers across Ukraine.
Andrew Solovei, western Ukraine, to build out a network to compensate small-scale Ukrainian volunteers in a scalable and verifiable manner.
Olena Skyrta, Kyiv, to start a for-profit that will tie new scientific innovations to Ukrainian and other businesses.
Yevheniia Vidishcheva, Kyiv, theatrical project to travel around Ukraine.
Alina Beskrovna, Mariupol and Harvard Kennedy School, general career support and to work on the economic reconstruction of Ukraine.