The permanent pause?

Here is a petition signed by Elon Musk and many other luminaries, calling for a pause in “Giant” AI experiments.  Here is one excerpt:

In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Few people are against such developments, or at least against most of them.  Yet this passage, to my eye, shows how few realistic, practical alternatives the pausers have.  Does regulation of any area, even of simpler ones than AI, ever work so well?  Exactly how long is all this supposed to take?  How well do those same signers expect our Congress to handle, say, basic foreign policy decisions?  The next debt ceiling crisis?  Permitting reform?

Is there any mention of public choice/political economy questions in the petition, or even a peripheral awareness of them?  Any dealing with national security issues and America’s responsibility to stay ahead of potentially hostile foreign powers?  And what about the old DC saying, running something like “in politics there is nothing so permanent as the temporary”?

Might we end up with a regulatory institution as good as the CDC?

By the way, what does it mean to stop the “progress” while not ceasing to improve the safety testing?  Are those two goals really so separable?

Overall this petition is striking for its absence of concrete, practicable recommendations, made in the light of the semi-science of political economy.  You can think of it as one kind of evidence that these individuals are not so very good at predicting the future.
