Dialogue between an economist and a physicist
Interesting, but I think highly flawed on both sides. Here is one excerpt from the physicist:
Physicist: True enough. So we would likely agree that energy growth will not continue indefinitely. But two points before we continue: First, I’ll just mention that energy growth has far outstripped population growth, so that per-capita energy use has surged dramatically over time—our energy lives today are far richer than those of our great-great-grandparents a century ago [economist nods]. So even if population stabilizes, we are accustomed to per-capita energy growth: total energy would have to continue growing to maintain such a trend [another nod].
Second, thermodynamic limits impose a cap on energy growth lest we cook ourselves. I’m not talking about global warming, CO2 build-up, etc. I’m talking about radiating the spent energy into space. I assume you’re happy to confine our conversation to Earth, foregoing the spectre of an exodus to space, colonizing planets, living the Star Trek life, etc…
At that 2.3% growth rate, we would be using energy at a rate corresponding to the total solar input striking Earth in a little over 400 years. We would consume something comparable to the entire sun 1400 years from now. By 2500 years, we would use energy at the rate of the entire Milky Way galaxy—100 billion stars! I think you can see the absurdity of continued energy growth.
I think it is easy enough for the economist to argue that energy, at some margin, has diminishing returns for creating utility. So we then have dematerialized economic growth, not an ever-growing population (oscillation back and forth?), and thus we do not fry the planet, or for that matter the galaxy. A general lesson of national income statistics is that if you play out exponentials for long enough, over centuries you are simply talking about very different things, rather than a simple exponential growth of present conditions.
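The physicist’s timeline is easy to verify as a compound-growth calculation. A minimal sketch, assuming round numbers that are my own (roughly 18 TW of current world power use, ~174,000 TW of sunlight striking Earth, ~3.8e26 W for the Sun’s output, and ~100 billion Sun-like stars in the galaxy):

```python
import math

CURRENT_POWER = 1.8e13  # watts; assumed current world energy use (~18 TW)
GROWTH_RATE = 0.023     # the dialogue's 2.3% annual growth rate

def years_to_reach(target_watts, current=CURRENT_POWER, rate=GROWTH_RATE):
    """Years until exponential growth at `rate` lifts `current` to `target_watts`."""
    return math.log(target_watts / current) / math.log(1 + rate)

print(years_to_reach(1.74e17))         # sunlight striking Earth: a bit over 400 years
print(years_to_reach(3.8e26))          # the Sun's entire output: ~1350 years
print(years_to_reach(3.8e26 * 1e11))   # ~100 billion stars: ~2460 years
```

The exact years shift with the assumed starting figure, but only logarithmically, which is why the physicist’s “absurdity” point is robust to quibbles about the inputs.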
Don’t bet against the dollar
That is the topic of my latest Bloomberg column, here is one point of several:
The crypto revolution also seems to be heading in some dollar-friendly directions. Much recent crypto growth has come in the area of stablecoins, as evidenced by Stripe’s acquisition last week of Bridge. Most stablecoins are denominated in dollars, and typically they are backed by dollar-denominated securities, if only to avoid exchange-rate risk. If “programmable monies” have a future, which seems likely, that will further help the dominant currency — namely, the US dollar.
You might think that other monies will become programmable too. But since stablecoins often are most convenient for international transactions, as well as for internet-connected transactions, the most likely scenario is that stablecoins concentrate interest in the dollar. The US has by far the most influence of any nation over how the internet works.
The piece has other points of note.
The robustness of coal?
Coal consumption in 2030 is now estimated 6% higher than only a year ago. That may sound small, but it amounts to adding the equivalent of the consumption of Japan, the world’s fourth-largest coal burner. By 2030, the IEA now believes coal consumption will remain higher than it was back in 2010…
One notable statistic: Two-thirds of the total increase in energy demand in 2023 was met by fossil fuels, according to the IEA.
Here is more from Javier Blas at Bloomberg. Via Nicanor.
New report on nuclear risk
Phil Tetlock is part of the study, from the Forecasting Research Institute. Obviously this is very important. From Tetlock’s email to me:
“In brief, this study is the largest systematic survey of subject matter experts on the risk posed by nuclear weapons. Through a combination of expert interviews and surveys, 110 domain experts and 41 experienced forecasters predicted the likelihood of nuclear conflict, explained the mechanisms underlying their predictions, and forecasted the impact of specific tractable policies on the likelihood of nuclear catastrophe.
Key findings include:
- We asked experts about the probability of a nuclear catastrophe (defined as an event where nuclear weapons cause the death of at least 10 million people) by 2045, the centenary of the bombings of Hiroshima and Nagasaki. Experts assigned a median 4.5% probability of a nuclear catastrophe by 2045, while experienced forecasters put the probability at 1%.
a. Respondents thought that a nuclear conflict between Russia and NATO/USA was the adversarial domain most likely to be the cause of a nuclear catastrophe of this scale; however, risk was dispersed relatively evenly among the other adversarial domains we asked about: China/USA, North Korea/South Korea, India/Pakistan, and Israel/Iran.
- We asked participants about their beliefs on the likely effectiveness of several policy options aimed at reducing the risk of a nuclear catastrophe. Two policies emerged as clear favorites for most participants: a crisis communications network and nuclear-armed states implementing failsafe reviews. The median expert thought that a crisis communications network would reduce the risk of a nuclear catastrophe by 25%, and failsafe reviews would reduce it by 20%.”
You will find the report here.
Assorted links
1. How much do homeowners dislike density?
2. The views of the American public on AI.
3. Centaur is an AI that mimics human behavior, rather than trying to be as smart as possible.
4. Nicholas Decker on how to learn economics.
5. Ezra interviews Vivek (NYT).
The Health and Employment Effects of Employer Vaccination Mandates
Health care facilities considering mandating staff vaccination face a difficult tradeoff. While additional vaccination coverage will directly reduce disease transmission within the facility, the imposition of a mandate may also cause vaccine-hesitant staff to quit, which could harm patient care. To study this tradeoff, we leverage comprehensive administrative data covering virtually all US nursing homes, including payroll-based records on approximately 500 million daily nurse shifts and weekly data on COVID transmission and mortality at each facility. We use a difference-in-differences framework to estimate the impact of employer-imposed vaccine mandates at 581 nursing homes on disease spread, employment outcomes, and several patient care metrics. While mandates did slightly increase staff turnover, the effects were concentrated on staff working less than 20 hours per week, and resulted in a reduction of less than two minutes per patient-day. Furthermore, there is only limited evidence of lower levels of care at mandate facilities in typically-monitored conditions such as patient falls, pressure ulcers, or urinary tract infections. In contrast, implementing a vaccine mandate led to large increases in staff vaccinations at mandate facilities, which directly led to less transmission of and lower patient mortality from COVID. We estimate that vaccine mandates saved one patient life for every two facilities that enacted a mandate, a large effect given the typical facility has around 100 beds. Our results suggest that the health benefits of mandates far outweigh the costs in terms of reduced patient care from staff turnover.
Yup. For some of you, it is time to read it and weep. Here is the full paper by
Apologies, the great Frank Fukuyama still lives!
Good.
Monday assorted links
Effective Altruists and finance theory
One of the most admirable and impressive things about the EA movement is how many people in it will avidly learn about other areas. Whether it be animal welfare, mosquito bed nets, asteroid risk, or the properties of various AI programs, you can find numerous EAs who really have gone out of their way to master many of the details.
They don’t quite acquire expert knowledge, but due to their general facility in the application of reason, often they can outargue the experts themselves.
Yet one kind of person I have never met — ever — or seen on Twitter is an EA who understands finance at a comparable level. Never.
And that is odd, because EAs so stress the import of probabilistic thinking.
If you pose the “have you thought through being short the market?” question, one hears a variety of answers that are what I call “first-order wrong.” That is, there may well be more sophisticated defenses of those points of view, but you just hear the first-order response, designed to dispose of the question without much further thought. A few of those responses are:
1. “Why should I have to gamble?” (Given your other views, it is hedging not gambling)
2. “There is already evidence I am right. My friends and I made a lot of money buying Nvidia stock.”
3. “I don’t know how to short the market.” Or “Amateur investors shouldn’t short the market!”
4. “Did the stock market predict Hitler and WWII?”
5. “How could I possibly cash in if the world ends very suddenly? After all, the AGI has an incentive to deceive us.”
6. “But I don’t know when the world is going to end!”
7. “Why should I short the market when I can earn so much more going long on Nvidia!?”
8. “Well, I am not buying stocks!”
9. “If the world is ending soon, what do I need money for?”
10. “But if the world doesn’t end, things will be really great.”
And more. (I’ve even heard “Are you short the market?”) I will leave it as an exercise to the reader to work out what is wrong with these responses. In most cases o1 and Claude can come to your aid, if needed.
I do believe that Aella, for one, is in essence short the market. Good for her, as she is also pessimistic about AI. But here are two responses I have never ever heard, not once:
11. “I’m going to sit down and study finance and see if I can find a feasible way to short the market. If I can’t, I will feel sad, but I might get back to you for further guidance.”
12. “Soon enough, AI will be good enough to tell me how to short the market intelligently. Then I am going to do this — thanks for the tip! ”
Nope, never. The absence of the last one from the discourse I find especially odd. “AGI will be powerful enough to destroy us, but not good enough to help me do an effective short!” OK…
The sociology here is more indicative of what is going on than the arguments themselves. Because the EAs, rationality types, and doomsters here generally are very good at learning new things.
Of course, once shorting the market even enters serious contemplation (never mind actually doing it), you also start seeing current market prices as a kind of testing referendum on various doomster predictions. And suffice to say, market prices basically offer zero support for all of those predictions. And that is embarrassing, whether you should end up shorting the market or not. Many EAs and rationality types are also fans of prediction markets in other contexts.
I nonetheless would urge many EA, rationality, and AI doomster types to learn more basic finance. It can liberate you from various mental chains, and it will be useful for the rest of your life, no matter how long or short that may be.
Addendum: So, so many fallacies in the comments. Here is one brief response I wrote: “Just keep on buying puts with a small pct. of your wealth. You don’t have to use leverage, though of course a real pessimist should. What is hard about shorting is that the world isn’t in fact going to end! You are smuggling in categories from very different contexts. And none of this requires anything remotely like a “strong version of market efficiency.” It does require that the end of the world is bearish for prices at some point! [once people recognize doom might be coming, not when doom finally arrives]”
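For readers unsure what “buying puts” involves: a put pays off when the underlying falls below the strike, so a small recurring allocation to puts is a hedge whose worst case each period is the premium paid. A minimal sketch, with invented strike, premium, and price numbers purely for illustration:

```python
def put_payoff(strike, premium, price_at_expiry):
    """P&L per put at expiry: intrinsic value minus the premium paid up front."""
    return max(strike - price_at_expiry, 0.0) - premium

# A put struck at 100, bought for a premium of 3:
print(put_payoff(100, 3, 80))   # market falls 20%: gain 17 per put
print(put_payoff(100, 3, 110))  # market rises: lose only the 3 premium
```

The asymmetry is the point of the hedge: losses are capped at the premium, while the payoff grows with the size of the crash the pessimist expects.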
*Conclave*
I would say this was a good not great movie, but I pass along word because it is rare to have a movie so exclusively devoted to both public choice and social choice theory, and realistically so. (Thomas Reese said in an interview that the details on the conclave were pretty realistic too; if you don’t know of Reese his book Inside the Vatican is perhaps the best book on bureaucracy ever.)
I cannot say much more without spoiling the plot. Needless to say, Richard McKelvey would not have walked away from this one feeling refuted…
The film also takes itself seriously in a good way, which these days in Hollywood is increasingly rare.
Sunday assorted links
1. An eighth grade test from 1912.
2. “Create an AI agent with a crypto wallet (and optional X account) in less than 3 minutes”
3. Robot chips away at one pillar of the cost disease.
4. Was the Trans-Siberian railroad the greatest mega-project of all time?
5. About twenty percent of U.S. jobs require heavy or very heavy strength. Not irrelevant to the current AI debates.
6. Roon speaks.
7. New bit of a Chopin waltz! (NYT)
What should I ask Paula Byrne?
Paula Jayne Byrne, Lady Bate…is a British biographer, novelist, and literary critic.
Byrne has a PhD in English literature from the University of Liverpool, where she also studied for her MA, having completed a BA in English and Theology at West Sussex Institute of Higher Education (now Chichester University).
Byrne is the founder and chief executive of a small charitable foundation, ReLit: The Bibliotherapy Foundation, dedicated to the promotion of literature as a complementary therapy in the toolkit of medical practitioners dealing with stress, anxiety and other mental health conditions. She is also a practicing psychotherapist, specializing in couples and family counseling.
Byrne, who is from a large working-class Roman Catholic family in Birkenhead, is married to Sir Jonathan Bate, Shakespeare scholar and former Provost of Worcester College, Oxford.
Her books cover Jane Austen, Mary Robinson, Evelyn Waugh, Barbara Pym, JFK’s sister, two novels, and her latest is a study of Thomas Hardy’s women, both in his life and in his fiction, namely Hardy’s Women: Mother, Sister, Wives, Muses. Here is her home page. Here is Paula on Twitter.
Does the O-Ring model hold for AIs?
Let’s say you have a production process, and the AIs involved operate at IQ = 160, and the humans operate at IQ = 120. The O-Ring model, as you may know, predicts you end up with a productivity akin to IQ = 120. The model, in short, says a production process is no better than its weakest link.
More concretely, it could be the case that the superior insights of the smarter AIs are lost on the people they need to work with. Or overall reliability is lowered by the humans in the production chain. This latter problem is especially important when there is complementarity in the production function, namely that each part has to work well for the whole to work. Many safety problems have that structure.
The overall productivity may end up at a somewhat higher level than IQ = 120, if only because the AIs will work long hours very cheaply. Still, the quality of the final product may be closer to IQ = 120 than you might have wished.
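In Kremer’s O-Ring model, output quality is proportional to the product of each link’s quality, so a single weak link caps the whole chain. A toy calculation (the per-step quality numbers are invented for illustration):

```python
def oring_output(qualities):
    """O-Ring production: total quality is the product of per-task success rates."""
    out = 1.0
    for q in qualities:
        out *= q
    return out

ai_only = oring_output([0.95] * 4)           # four high-quality AI steps
mixed = oring_output([0.95] * 3 + [0.70])    # one human step at lower reliability
print(ai_only)  # ~0.81
print(mixed)    # ~0.60: the weakest link drags down the whole chain
```

Note the multiplicative structure: swapping one 0.95 step for a 0.70 step cuts total quality by more than a quarter, which is the sense in which overall productivity sits closer to the weakest participant than to the strongest.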
This is another reason why I think AI productivity will spread in the world only slowly.
Sometimes when I read AI commentators I feel they are imagining production processes of AIs only. Eventually we may get there, but I do not see that state of affairs coming anytime soon, if only for legal and regulatory reasons.
Furthermore, those AIs might have some other shortcomings, IQ aside. And an O-Ring logic could apply to those qualities as well, even within the circle of AIs themselves. So if say Claude and the o1 model “work together,” you might end up with the worst of both worlds rather than the best.
Saturday assorted links
1. Samuel Siskind, Awake; he is a 17-year-old composer.
2. Les Aunties, female vocalists from Chad.
3. Emory University invests in Bitcoin.
4. We can terraform the American West. By Casey Handmer, I am a big fan of this approach and think it is far more promising than Mars exploration.
5. Alan Heston has passed away, RIP.
6. Treasure hunters compensated for unearthed coins, supply is elastic edition.