AGI risk and Austrian subjectivism

I have been thinking lately that my skepticism about AGI risk stems in part from my background in Austrian economics, in particular the subjectivist approach of the Austrian school.  I’ve long found subjectivism the least fruitful of the Austrian insights, useful primarily insofar as it corresponds to common sense, but oversold by the Austrians themselves.  That said, early influences still shape one’s thinking, and in this case I think it is for the better.

Unlike some skeptics, I am plenty optimistic about the positive capabilities of AI.  I just don’t think it will ever acquire an “internal” voice, or a subjective sense as the Austrian economists understand the idea.  A lot of the AGI worriers seem to be “behaviorists in all but name.”  For them, if an AI can do smart things, it is therefore smart.  I, in turn, would stress the huge remaining differences between very capable AIs and self-conscious entities such as humans, dogs, and octopuses.

We (or at least I) do not understand how consciousness arose or evolved; for some this will be a theological point.  But I see zero evidence that AI is converging upon a consciousness-producing path.  That is one reason (not the only one, to be clear) why I do not expect a super-powerful AI to wake up one morning and decide to do us all in.

I definitely worry about AI alignment, just as I worry about whether my car brakes will work properly on a slippery road, or about all those power blackouts in Pakistan.  A lot of human-built systems do not perform perfectly, to say the least.  And the lack of transparency in how AIs operate will mean a lot of opaque failures with AI as well.  I thus would put an AI in charge of a military drone swarm, but not the nuclear weapons.

In the meantime, I don’t expect “the ghost in the machine” to appear anytime soon.
