Eternal economic growth and Effective Altruism

Dwarkesh Patel surveys one angle of that debate in this short post, and also here.  More commonly, from EA types I increasingly hear the argument that if an economy grows at [fill in the blank] percent for so many thousands of years, at some point it becomes so massively large relative to the galaxy that it has to stop growing.  It is then concluded that economic growth is not so important, because all it does is help us arrive on the final frontier sooner, and be stuck there, rather than increasing net human well-being over time.  (Then often one hears the follow-up claim that existential risk should be prioritized over growth.)
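
To see the arithmetic behind that scaling claim, here is a minimal back-of-the-envelope sketch.  The growth rate, time horizon, and atom count below are purely illustrative assumptions of mine, not figures from the argument itself; the only point is how quickly compounding outruns any fixed physical stock.

```python
from math import log10

# Purely illustrative assumptions, not figures from the post:
annual_growth = 0.02   # suppose 2 percent real growth per year
years = 10_000         # sustained for ten thousand years

# Work in log10 so the growth factor doesn't overflow a float.
log10_growth_factor = years * log10(1 + annual_growth)
print(f"Total output multiplies by roughly 10^{log10_growth_factor:.0f}")  # ~10^86

# For comparison, the observable universe is commonly estimated to hold on the
# order of 10^80 atoms, so a naive "more output requires proportionally more
# matter" premise runs out of universe well before the horizon ends.
```

That is the whole force of the objection; whether output must scale with matter in that way is exactly what the points below question.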

I am not persuaded by that argument, and here are a few points in response:

1. Growth may involve dematerialization and greater energy efficiency, rather than eating up the galaxy’s resources.  Much of modern growth already has taken this form, with likely more to come.

2. Real GDP comparisons give you good information locally, when comparing relatively similar societies or states of affairs.  The numbers have much less meaning across very different world states, or across very long spans of time with ongoing growth.  Comparing, say, Beverly Hills GDP per capita to Stone Age GDP per capita just isn’t an accurate numerical exercise, period.  It is fair to say that the Beverly Hills number is “a whole lot more,” and much better, but I wouldn’t go too much further than that.  They are very different situations, rather than one being a mere exponential version of the other.  The economics literature on real income comparisons supports this take.

This point most decidedly does not prove that “eternal growth” is possible.  It does show that the “you can’t just keep on scaling up” argument against ongoing growth does not get off the ground.  It is really just asserting, without actual backing, that “society couldn’t be very different from how it is right now.”  And note that points #1 and #2 are mirror images of each other.
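
To make the index-number problem concrete, here is a toy calculation in the spirit of the standard Laspeyres and Paasche quantity indexes, with entirely invented prices and quantities for a “Stone Age” bundle and a “Beverly Hills” bundle; nothing here comes from the post itself.

```python
# Toy index-number example with invented data: two goods, "food" and "computing".
# The numbers are hypothetical, chosen only to show how far apart the standard
# quantity indexes can fall when the two consumption bundles barely overlap.
from math import log10

p0 = {"food": 1.0, "computing": 1e6}    # "Stone Age" prices: computing effectively unobtainable
q0 = {"food": 1.0, "computing": 0.0}    # "Stone Age" quantities
p1 = {"food": 1.0, "computing": 1e-6}   # "Beverly Hills" prices: computing nearly free
q1 = {"food": 5.0, "computing": 1e9}    # "Beverly Hills" quantities

def value(prices, quantities):
    """Total expenditure: price times quantity, summed over all goods."""
    return sum(prices[g] * quantities[g] for g in prices)

# Laspeyres quantity index: values both bundles at the old ("Stone Age") prices.
laspeyres = value(p0, q1) / value(p0, q0)
# Paasche quantity index: values both bundles at the new ("Beverly Hills") prices.
paasche = value(p1, q1) / value(p1, q0)

print(f"Laspeyres real-income ratio: roughly 10^{log10(laspeyres):.0f}")  # ~10^15
print(f"Paasche real-income ratio:   roughly {paasche:,.0f}")             # ~1,005
```

Depending only on which period’s prices you use, “how much richer” ranges from about a thousandfold to about 10^15, which is the sense in which the comparison is not an accurate numerical exercise.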

3. In general I am not persuaded by backwards induction modes of moral reasoning.  The claim is that “in period x we will hit obstacle y, therefore let us reason backwards in time to the present day and conclude…”  Backwards induction does not in general hold, either practically or morally, especially across very long periods of time and when great uncertainty is present.  I am not saying backwards induction never holds, but most of these arguments are simply applying some form of moral backwards induction without justifying it.  A simpler and more accurate perspective is that the status quo already is highly uncertain, and we don’t have much of a workable sense of how things will run as we approach a “frontier,” or even exactly what that concept should mean.  And this point is not unrelated to #2.  We are being asked to draw conclusions about a world we cannot readily fathom.

3b. A super-nerdy response might be “are we sure we can’t just find more and more growth resources forever, especially if the final theory of physics is quite strange?”  It is hard for me to judge that one, but I find #3 much more relevant than any version of this #3b.  Still, if we are going to look that far into the future, I don’t see any reason to rule out #3b, which would keep alive the expected value of future economic growth.

4. The world is likely to end long before the binding growth frontier is reached, even assuming that concept has a clear meaning.  In the meantime, it is better to have higher economic growth.  This rejoinder really is super-simple.

5. Often I am suspicious of the method of “sequential elimination” in moral reasoning.  It might run as follows: “I can show you that X doesn’t matter, therefore we are left with Y as the thing that matters.”  Somehow the speaker ought to take greater care to consider X and Y together, and to realize that all of the moral reasoning along the way is going to be imperfect.  The “ghost traces” of X may still continue to matter a great deal!  What if I argued the following?  “Pascal’s Wager arguments can be used to show that existential risk cannot be allowed to dominate our moral theories, therefore ongoing economic growth has to be the thing that matters.”  That too would be fallacious, and for similar reasons, even assuming you saw Pascal’s Wager-type arguments as something to be rejected.

A better approach would be “both X and Y are on the table here, and both X and Y seem to be really important.  What kinds of consiliences can we find where arguments for both X and Y work together in similar directions?”  And that is where we should put our energies.  More concretely, that might include finding and mobilizing talent, building better institutions, and making sure we don’t end up controlled by a dominant China.

In sum, the case for sustainable economic growth is alive and well, and it need not come at the expense of attention to existential risk.
