Steven Pinker on existential risk

He is harsh, but my view is not far from his:

The AI-existential-threat discussions are unmoored from evolutionary biology, cognitive psychology, real AI, sociology, the history of technology and other sources of knowledge outside the theater of the imagination. I think this points to a meta-problem. The AI-ET community shares a bad epistemic habit (not to mention membership) with parts of the Rationality and EA communities, at least since they jumped the shark from preventing malaria in the developing world to seeding the galaxy with supercomputers hosting trillions of consciousnesses from uploaded connectomes. They start with a couple of assumptions, and lay out a chain of abstract reasoning, throwing in one dubious assumption after another, till they end up way beyond the land of experience or plausibility. The whole deduction exponentiates our ignorance with each link in the chain of hypotheticals, and depends on blowing off the countless messy and unanticipatable nuisances of the human and physical world. It’s an occupational hazard of belonging to a “community” that distinguishes itself by raw brainpower. OK, enough for today – hope you find some of it interesting.

That is by no means the only harsh paragraph. Here is the entire dialogue with Richard Hanania. And be careful what you write in the MR comments section; the AIs are reading you!
