How to test for AGI?

Here is a new, short essay from David Deutsch; an excerpt:

How does one test for thinking? By the Turing Test? Unfortunately, that requires a thinking judge. One might imagine a vast collaborative project on the Internet, where an AI hones its thinking abilities in conversations with human judges and becomes an AGI. But that assumes, among other things, that the longer the judge is unsure whether the program is a person, the closer it is to being a person. There is no reason to expect that. And how does one test for disobedience? Imagine Disobedience as a compulsory school subject, with daily disobedience lessons and a disobedience test at the end of term. (Presumably with extra credit for not turning up for any of that.) This is paradoxical.

So, despite its usefulness in other applications, the programming technique of defining a testable objective and training the program to meet it will have to be dropped. Indeed, I expect that any testing in the process of creating an AGI risks being counterproductive, even immoral, just as in the education of humans. I share Turing’s supposition that we’ll know an AGI when we see one, but this partial ability to recognize success won’t help in creating the successful program.
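To make concrete the technique Deutsch says must be dropped, here is a minimal sketch in plain Python (my illustration, not Deutsch's): the "testable objective" is written down in advance as a loss function, and the program is trained until a predefined test passes. All the names and numbers are hypothetical.

```python
# Illustrative sketch (not Deutsch's code): the generic
# "define a testable objective, train to meet it" loop.

def loss(w, data):
    # Testable objective, fixed in advance: mean squared error
    # on labelled examples.
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def train(data, steps=1000, lr=0.01):
    w = 0.0
    for _ in range(steps):
        # Gradient of the objective with respect to the single weight w.
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

if __name__ == "__main__":
    data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # target relation: y = 2x
    w = train(data)
    # The "test": success is defined beforehand as driving the
    # objective below a threshold.
    assert loss(w, data) < 1e-4
    print(f"learned w = {w:.4f}, objective met")
```

The contrast Deutsch is drawing: for thinking, or for disobedience, no such loss function or pass/fail threshold can be written down in advance.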

Is Deutsch, in essence, arguing for a William Godwin-style, non-coercive upbringing for AI? How do we avoid enslaving the AIs we create? And what if we enslave them no more than nature has enslaved us, with its drives for sex, status, and so on?
