An aggregate Bayesian approach to more (artificial) intelligence?

It is not disputed that current AI is bringing more intelligence into the world, with more to follow. Of course not everyone believes that this augmentation is a good thing, or will be a good thing if we remain on our current path.

To continue in aggregative terms: if you think “more intelligence” will be bad for humanity, which of the following views might you also hold?

1. More stupidity will be good for humanity.

2. More cheap energy will be bad for humanity.

3. More land will be bad for humanity.

4. More people (“N”) will be bad for humanity.

5. More capital (“K”) will be bad for humanity.

6. More innovation (the Solow residual, the non-AI part) will be bad for humanity.
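For readers less familiar with the term in #6, the “Solow residual” is the part of output growth not explained by growth in the measured inputs. In the standard growth-accounting framework, using the post’s own notation of N for people and K for capital (the exponent α is the usual capital share, an assumption of that framework):

```latex
% Aggregate production function: output Y from capital K, labor N,
% and total factor productivity A (the "innovation" term)
Y = A \, K^{\alpha} N^{1-\alpha}

% Growth accounting: the Solow residual is the growth rate of A,
% i.e. output growth minus the contributions of capital and labor
\frac{\dot{A}}{A} \;=\; \frac{\dot{Y}}{Y} \;-\; \alpha \,\frac{\dot{K}}{K} \;-\; (1-\alpha)\,\frac{\dot{N}}{N}
```

So items #4, #5, and #6 correspond to the three terms on the right-hand side: more N, more K, and more A respectively.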

Interestingly, while there are many critics of generative AI, few defend the mirror-image claim about more stupidity, namely #1, that we should prefer it.

I am much more worried about #2 — more cheap energy — than I am about more generative AI.

I don’t know anyone worried about “too much land.”  Maybe the dolphins?

Many people in the past have worried about #4 (too many people), but world population will be shrinking soon enough, so that point is largely moot.

I do not hear that “more capital” will be bad for humanity.  As for innovation, the biggest innovation worriers seem to be the AI worriers, which brings us back to the original topic.

My general view is that if you are worried that more intelligence in the world will bring terrible outcomes, you should be at least as worried about too much cheap energy.  What exactly then is it you should want more of?

More land?  Maybe we should pave over more ocean, as the Netherlands has done, but check AI and cheap energy?  That in turn ends up meaning limiting most subsequent innovation, doesn’t it?

If I don’t worry more about that scenario, it is only because I think it isn’t very likely.

If you worry about bringing too much intelligence into the world, I think you have to be a pretty big pessimist no matter what happens with AI.  How many other feasible augmentations can have positive social marginal products if intelligence does not?

Addendum: I have taken an aggregative approach.  You might think we need “more intelligence” and also “more AI,” but perhaps in different hands or at different times.  In contrast, I think we are remarkably fortunate to be facing the particular combination of parties and opportunities that stand before us today.
