Where the AI extinction warning goes wrong

There is so much to say about this one; in my view, it has been counterproductive for all those worried about AI safety.  Here is one excerpt from my latest Bloomberg column:

Sometimes publicity stunts backfire. A case in point may be the one-sentence warning issued this week by the Center for AI Safety: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

…The first problem is the word “extinction.” Whether or not you think the current trajectory of AI systems poses an extinction risk — and I do not — the more you use that term, the more likely the matter will fall under the purview of the national security establishment. And its priority is to defeat foreign adversaries. The bureaucrats who staff the more mundane regulatory agencies will be shoved aside.

US national security experts are properly skeptical about the idea of an international agreement to limit AI systems, as they doubt anyone would be monitoring and sanctioning China, Russia or other states (even the UAE has a potentially powerful system on the way). So the more people say that AI systems can be super-powerful, the more national-security advisers will insist that US technology must always be superior. I happen to agree about the need for US dominance — but realize that this is an argument for accelerating AI research, not slowing it down.

A second problem with the statement is that many of the signers are important players in AI developments. So a common-sense objection might go like this: If you’re so concerned, why don’t you just stop working on AI? There is a perfectly legitimate response — you want to stay involved because you fear that if you leave, someone less responsible will be put in charge — but I am under no illusions that this argument would carry the day. As they say in politics, if you are explaining, you are losing.

The geographic distribution of the signatories will also create problems. Many of the best-known signers are on the West Coast, especially California and Seattle. There is a cluster from Toronto and a few from the UK, but the US Midwest and South are hardly represented. If I were a chief of staff to a member of Congress, or a political lobbyist, I would be wondering: Where are the community bankers? Where are the owners of auto dealerships? Why are so few states and House districts represented on the list?

I do not myself see the AI safety movement as a left-wing political project. But if all you knew about it was this document, you might conclude that it is. In short, the petition may be doing more to signal the weakness and narrowness of the movement than its strength.

Then there is the brevity of the statement itself. Perhaps this is a bold move, and it will help stimulate debate and generate ideas. But an alternative view is that the group could not agree on anything more. There is no accompanying white paper or set of policy recommendations. I praise the signers’ humility, but not their political instincts.

Again, consider the public as well as the political perception. If some well-known and very smart players in a given area think the world might end but make no recommendations about what to do about it, might you decide just to ignore them altogether? (“Get back to me when you’ve figured it out!”) What if a group of scientists announced that a large asteroid was headed toward Earth? I suspect they would have some very specific recommendations, on such issues as how to deflect the asteroid and prepare defenses.

Do read the whole thing.  You will note that my arguments do not require any particular view of AGI risk, one way or the other.  I view this statement as a mistake from all points of view, except perhaps for the accelerationists.
