My summary views on AI existential risk

That is the topic of my latest Bloomberg column, written and edited, by the way, before…all that stuff happened at OpenAI. Here is one excerpt:

First, I view AI as more likely to lower than to raise net existential risks. Humankind faces numerous existential risks already. We need better science to limit those risks, and strong AI capabilities are one way to improve science. Our default path, without AI, is hardly comforting.

The above-cited risks may not kill each and every human, but they could deal civilization as we know it a decisive blow. China or some other hostile power attaining super-powerful AI before the US does is yet another risk, not quite existential but worth avoiding, especially for Americans.

It is true that AI may help terrorists create a bioweapon, but thanks to the internet that is already a major worry. AI may help us develop defenses and cures against those pathogens. We don’t have a scientific way of measuring whether aggregate risk goes up or down with AI, but I will opt for a world with more intelligence and science rather than less.

As for the corporate issues, I am hoping for a good resolution…
