Existential risk and growth
Here is the abstract of a new paper by Leopold Aschenbrenner:
Technological innovation can create or mitigate risks of catastrophes—such as nuclear war, extreme climate change, or powerful artificial intelligence run amok—that could imperil human civilization. What is the relationship between economic growth and these existential risks? In a model of endogenous and directed technical change, with moderate parameters, existential risk follows a Kuznets-style inverted U-shape. This suggests we could be living in a unique “time of perils,” having developed technologies advanced enough to threaten our permanent destruction, but not having grown wealthy enough yet to be willing to spend much on safety. Accelerating growth during this “time of perils” initially increases risk, but improves the chances of humanity’s survival in the long run. Conversely, even short-term stagnation could substantially curtail the future of humanity. Nevertheless, if the scale effect of existential risk is large and the returns to research diminish rapidly, it may be impossible to avert an eventual existential catastrophe.
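To see where an inverted U-shape can come from, here is a minimal numerical sketch—emphatically not the paper's actual model, with functional forms I am assuming purely for illustration. Danger scales linearly with the technology level, while safety effort scales more than linearly with wealth (safety as a luxury good), so the hazard rate first rises and then falls:

```python
import numpy as np

def hazard(A, alpha=1.5, beta=1.0):
    """Hypothetical hazard rate as a function of technology level A.

    Assumed forms (for illustration only): danger grows linearly in A,
    while safety effort grows like A**alpha with alpha > 1, so richer
    societies eventually buy down risk faster than they create it.
    """
    danger = beta * A
    safety = A ** alpha
    return danger / (1.0 + safety)

# Sweep technology levels and locate the "time of perils" peak.
A = np.linspace(0.01, 10.0, 1000)
h = hazard(A)
peak_A = A[np.argmax(h)]  # interior peak: risk rises, then declines
```

With these assumed parameters the peak sits at an interior technology level (analytically at A = 2**(2/3) ≈ 1.59), reproducing the qualitative story: risk climbs during early development and falls once safety spending dominates.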
Bravo! 44 pp. of brilliant text, another 40 pp. of proofs and derivations, and rumor has it that Leopold is only 17 years old, give or take.
If you happen to know Leopold, please do ask him to drop me a line.
For the pointer I thank Pablo Stafforini.