Given further data on the stunning performances of AlphaZero, Charles Murray asked me that on Twitter. And for now the answer surely seems to be yes: just let AlphaZero rip, and keep the human at bay. It’s a bit like the joke about the factory: “The dog is there to keep the man away from the machines, and the man is there to guard the dog.” (Or is it the other way around?)
But here’s the thing: right now there is only one AlphaZero, and AlphaZero does not play like God (I think). At some point there will be more projects of this kind, and they will not always agree as to what is the best chess move. Re-enter the human! Imagine a human turning on AlphaZero and five other such programs, seeing where they disagree, and then querying the programs further to find a better answer. It is at least possible (though by no means certain) that a human will be better at doing this than a machine would be.
Keep in mind, the original role of the human in Advanced [man-machine] Chess was not to substitute human chess judgment for machine chess judgment in any kind of discretionary fashion. It was to adjudicate disagreements across programs: “Rybka has a slightly better opening book. Fritz is better in closed endgames. Houdini is tops at defense.” And so on. The human then sided with one engine over the others, or simply spent more engine time investigating some options rather than others.
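The adjudication workflow described here can be sketched in a few lines of code. This is a toy illustration, not any real system: the engine names, moves, and trust weights below are all hypothetical, standing in for the human's judgment about which engine to trust in which kind of position.

```python
# Toy sketch of engine adjudication: collect each engine's preferred move,
# then break disagreements with human-assigned trust weights reflecting
# perceived engine strengths. All names and numbers are illustrative.
from collections import defaultdict

def adjudicate(suggestions, weights):
    """suggestions: {engine: move}; weights: {engine: trust for this position type}."""
    votes = defaultdict(float)
    for engine, move in suggestions.items():
        votes[move] += weights.get(engine, 1.0)
    # The "human" step: side with the most trusted coalition of engines.
    return max(votes, key=votes.get)

# Three hypothetical engines disagree on the opening move.
suggestions = {"EngineA": "e4", "EngineB": "d4", "EngineC": "e4"}
weights = {"EngineA": 1.0, "EngineB": 1.5, "EngineC": 0.8}  # human trust levels
print(adjudicate(suggestions, weights))  # → e4 (coalition weight 1.8 vs. 1.5)
```

In practice the human did something richer than weighted voting, of course: allocating extra engine time to the contested lines, which this sketch does not capture.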
It could run the same way for neural net methods, once we have a general sense of the strengths and weaknesses of the different projects. So yes, man-machine cooperation in chess is a loser right now, but it may well come back. And there is a broader economic lesson in that: automation may eliminate jobs, but it does not necessarily eliminate them permanently.