Is the age of man-machine cooperation over in chess?

Given further data on the stunning performances of AlphaZero, Charles Murray asked me that on Twitter.  And for now the answer surely seems to be yes: just let AlphaZero rip, and keep the human at bay.  It’s a bit like the joke about the factory: “The dog is there to keep the man away from the machines, and the man is there to guard the dog.”  (Or is it the other way around?)

But here’s the thing: right now there is only one AlphaZero, and AlphaZero does not play like God (I think).  At some point there will be more projects of this kind, and they will not always agree as to what is the best chess move.  Re-enter the human!  Imagine a human turning on AlphaZero and five other such programs, seeing where they disagree, and then querying the programs further to find a better answer.  It is at least possible (though not necessary) that a human will be better at doing this than will a machine.

Keep in mind, the original role of the human in Advanced [man-machine] Chess was not to substitute human chess judgment for machine chess judgment in any kind of discretionary fashion.  It was to adjudicate disagreements across programs: “Rybka has a slightly better opening book.  Fritz is better in closed endgames.  Houdini is tops at defense.”  And so on.  The human then sided with one engine over the others, or simply spent more engine time investigating some options rather than others.
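
By way of illustration, here is a minimal sketch of that adjudication loop using the python-chess UCI interface; the engine binaries, time limits, and tie-break rule are placeholder assumptions rather than anything from Advanced Chess itself:

    import chess
    import chess.engine

    # Hypothetical local engine binaries -- adjust paths to whatever is installed.
    ENGINE_PATHS = {
        "stockfish": "/usr/local/bin/stockfish",
        "lc0": "/usr/local/bin/lc0",
    }

    def poll_engines(board, seconds):
        """Ask each engine for its preferred move in the given position."""
        choices = {}
        for name, path in ENGINE_PATHS.items():
            engine = chess.engine.SimpleEngine.popen_uci(path)
            try:
                result = engine.play(board, chess.engine.Limit(time=seconds))
                choices[name] = result.move
            finally:
                engine.quit()
        return choices

    def adjudicate(board):
        """Accept a unanimous choice; otherwise spend more engine time on the dispute."""
        choices = poll_engines(board, seconds=1.0)
        if len(set(choices.values())) == 1:
            return next(iter(choices.values()))
        # Disagreement: re-poll with a bigger budget, standing in for the human
        # "querying the programs further."
        deeper = poll_engines(board, seconds=10.0)
        # Crude tie-break: defer to the first engine listed.
        return next(iter(deeper.values()))

    if __name__ == "__main__":
        print(adjudicate(chess.Board()))

The crude tie-break at the end is exactly the step the human operator used to handle, armed with a sense of each engine's strengths and weaknesses.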

It could possibly run the same way for neural net methods, once we have a general sense of the strengths and weaknesses of different projects.  So yes, man-machine cooperation in chess is a loser right now, but it may well come back.  And there is a broader economic lesson in that, namely that automation may eliminate jobs, but it does not necessarily eliminate them permanently.

Comments

I think it is unlikely that humans will be able to add value to AI decisions in arenas more complex than chess. Early indications are that we won't be able to tell why the AI is making each decision or what the better alternate decisions could possibly be.

That video Tyler posted this morning: It's stunning - STUNNING - to see AlphaZero willingly forfeit three pawns in exchange for the position. I've never seen anything like it. I am skeptical any human has ever played like that. So I agree - it's very hard for me to believe a human is going to be able to predict which AI system is "better."

Yawn....it's called an exchange sacrifice, very common in master human play, giving up Rook for Knight. Also out of 1000 games AlphaZero only won something like 150, and drew 839, meaning most games were tied.
Just watched the game you talk about, annotated by GM King, and it simply shows a Horwitz raking attack by the bishop pair with an open g-file. In such positions some humans sacrifice both bishops, it's very standard. Color me unimpressed.

Strange how you don't mention the part about sacrificing three pawns for position.

There were already examples of computers doing things like this:

http://www.chessgames.com/perl/chessgame?gid=1713451

@me - good game, except in this case the PC engine that made positional sacrifices, and was not materialistic, lost.

That was somewhat the premise behind Roko's Basilisk (https://wiki.lesswrong.com/wiki/Roko%27s_basilisk). No I don't think we would be able to tell...until it starts hurting at least. Then we'll absolutely know the alternatives would have been better, but then it will be too late...

I've said too much already....

Maybe we'll never be able to tell. Machines may have been ruling the world for years now. Maybe "the Swamp"/"Deep State" is simply AI hegemony.

Isn't that the premise of "The Matrix"?

Hideo Kojima already told that story with Metal Gear Solid

The greatness of chess here is that the fear of chess is embodied by the machine. The fear of the propaganda machine, i.e., the NYT. Take Michelle Goldberg's "Anti-Zionism isn't the same as Anti-Semitism." Right away, the fear of "gigantic, psychological monster poised to threaten the entire world," is presented. Indeed, David Ben Gurion actually left out borders in his independence speech at all. Indeed, Arafat recognized the two state solution in 1974. Her Op-Ed is harmful but also insightful: She does not acknowledge the Palestinian political reality, even address the political narrative, "the drapery produced by the Anti-Semitic/Zionist tautology curtains off Syria, Iraq, the PLOT, even Saudi Arabia and Jordan." That Palestine continues to thrive through peaceful, political measures should actually be second to their human displacement. The "some criticism of Israel is anti-Semitic" is the same as saying "some criticism of America is racist." It's baffling. The Palestinian narrative stands in the realm of poetics where the idea of an idealized political entity is less important than a religious human will. It stands opposed to tragedy of the commons absolutism. As far as "they" acculturate the declaration of independence, "they" are incredibly hopeful. What they are tasked with is no different than a medieval renaissance. The ties between Israel and Palestine bond them against totalitarianism. Israel has colonies and we do not call them a colonial enterprise. What Israel is attempting to do is set up a federal rule, which stands opposed to Palestinian conceptions of self-rule. Instead of actual ideation, Ms. Goldberg, in a token example of NYT propagandaism. It is sophistry, seductive meandering that defies any idea of coherence. She is merely describing a two party system (as usual without acknowledging congress), without acknowledging the Middle East is crumbling around Israel and Palestine. For example, are there any Iranian members of congress? Iraqi? Jordanian? Saudi Arabian? That the tragicomic irony of this is lost on the NYT points to a blindness that is deafening.

On the corners, it's even worse. Why even bring antisemitism into a conversation about a Palestinian member of congress? That's like saying, Hey Mr. Trump, bet Ryan Lanza voted for you. Or bet Epstein voted for Bill Clinton. One might ask Ms. Goldberg if she's worried about American terrorists bombing Israel? It's simply not relevant except that it goes to show that when people fight in wars, it is not always in the name of patriotism, but in the eyes of universality. There are bigger things, just ask Sojourner Truth. It goes to the vindictiveness of Charles Blow, who has not proven once that a Republican policy was racist in outcome but goes unabashedly promoting false stereotypes. There are epistemological conversations being had without any knowledge by the NYT.

So, we can assume there is no influence from Marginal Revolution on the NYT. We can assume Paul Krugman speaks on behalf of all reporters, op-ed writers, editorial staff, management, etc., and that no others read Marginal Revolution. Because if they did, it would be under the presumption that the fact one man killing one woman in Charlottesville, and one journalist in Saudi Arabia, spread troves of ink, while Richard Jefferson Sr. has not made the paper. The litany of grammatical errors, substantive oversight, the fraud of using BNP numbers for US growth, the photographic PTSD, etc. ... none of that is known to any management of the NYT; they continue to operate blindly. Because if that were not the case, and no one has been fired and no outright acceptance of fact has been made, that would be corruption of the filthiest degree, along with possible racketeering. So, going forward, I will assume Paul Krugman is the only one aware.

https://www.goodreads.com/shelf/show/ku-klux-klan

There are actually at least two: https://lczero.org/

Has any bio-(reverse-)engineering franchise equal to the task of adapting humans to live exclusively in aquatic environments (freshwater or saltwater) any chance of readying us for life on some planet that has somehow lost most of its breathable atmosphere for one reason or another? (How well might our learned machines one day play chess underwater?)

The future can be so hard to anticipate sometimes . . . .

" And there is a broader economic lesson in that, namely that automation may eliminate jobs, but it does not necessarily eliminate them permanently."

Not that there are any examples of automated jobs returning to humans.

Mechanical Turk?
🙃

OK, but that's one job. I'm sure Tyler will come up with a list soon - but Mechanical Turk is already taken.

Mechanical Turk is many jobs?

I think what Tyler means is that he sees no way that automation would ever eliminate HIS job. Not temporarily, not permanently, not ever.

As long as tenure exists.

" And there is a broader economic lesson in that, namely that automation may eliminate jobs, but it does not necessarily eliminate them permanently."

"Not that there are any examples of automated jobs returning to humans."

And the more AI improves, unless the improvement rate slows down considerably, the less chance there is for humans to get back into the loop.

Consider an AI that's at 1/10th of human capability in something, doubling every two years. It takes a little more than six years to get up to human capability in that thing. And since it keeps doubling every two years, it's eight times as capable only six years after that.
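
For concreteness, that arithmetic (nothing assumed beyond the numbers already in the comment):

    import math

    # Starting at 1/10th of human capability and doubling every two years:
    years_to_human_level = 2 * math.log2(10)   # ~6.6 years to reach parity
    years_parity_to_8x = 2 * math.log2(8)      # three more doublings = 6 more years
    print(round(years_to_human_level, 1), round(years_parity_to_8x, 1))  # 6.6 6.0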

Also, progress in AI (brains) puts great pressure on progress in artificial bodies (sight, hearing, movement, touch, etc.). Consider why we don't have humanoid robots to do, for example, janitorial work. It's not because the janitor robot's *body* is so tremendously expensive; it's because there is no use spending, say, $50,000 to build a body approximately as capable as a human if the brain is going to cost $1,000,000+. But if the brain only costs $10,000, then there is huge pressure to start building the body to go with the brain.

"But if it doubles every two years, it's 8 times as capable only two years later."

D-oh! It's 8 times as capable six years later, obviously.

The point was that the time to get to human capability in something can be very long, but with exponential growth getting to twice as capable or 4 times as capable is very short.

An example would be autonomous vehicles. They're still not up to the capability of the very best human drivers, so in situations like snow or rain it might be that humans can still outperform them. But once autonomous vehicles become as good as the best human drivers, they become insanely good just a couple of years later. So there's no time for human drivers to identify where autonomous vehicles are deficient and train to fill that niche. The deficiency niche closes too quickly.

Mark,

The only person who doesn't get this here is Tyler.

Coming soon: "Is Average Over?"

Harsh but fair.

Tyler, you are assuming humans are better at ensembling than AI. I have my doubts. I think it's time to abandon chess as a discriminator for general intelligence; it seems to have fallen comfortably into the domain of narrow intelligence.

+1. Chess is for dweebs.

For a while some computer scientists thought the same, and abandoned chess for poker, until they developed an expert poker system that beats 99% of players using simple "narrow intelligence" algorithms. What is general intelligence except narrow intelligence plus a random number generator? Chess is life, life is chess!

This was my idea as well. Sure, when AI peers of A0 emerge, smart humans using them as tools might be the new champions, but will these humans harness the diverse tools at their disposal more optimally than an AI trained for this job? It sounds pretty algorithmic: Take Rybka evaluations especially seriously when you're on defense. OK, an AI can implement that. In some fiction I wrote, the name for this kind of job was "daemon herder" - someone who mostly lets algorithmic processes run but steps in to make decisions when such processes hit their edge cases. I definitely think that daemon herding will be a major source of human employment at some point. Initially I thought that we would need human daemon herders perpetually, because no matter how good the daemons get, edge cases are guaranteed. But recently I started thinking that when the edge cases get complex and subtle enough, asking humans to call it one way or the other will not add anything to the effectiveness of the system beyond an expensive coin flip. In an intermediate stage, the herder-daemon interface will feature AIs making sophisticated but human-comprehensible arguments for their preferred course of action, and counterarguments to the proposals of rival daemons. Humans will judge who made the best argument. But even these arguments will eventually just become too subtle for meatheaded people, won't they? The AI that does the best job at convincing humans that it's right will eventually start losing to the AI that says "fuck off, human, I got this."

"Keep in mind, the original role in the human in Advanced [man-machine] Chess was not to substitute human chess judgment for machine chess judgment in any kind of discretionary fashion. "

Huh? Kasparov and others overruled the chess engine (singular) all the time in early Advanced Chess.

If ensembling really worked, the Advanced Chess tournaments in 2009 and later would've simply switched to that and kept going. They didn't.

And given the appalling performance of the Go pros at the Ke Jie tournament when paired with Master, why think Go would be any different? At some point, humans make too many errors to implement any useful ensembling, and a genuine ensembling algorithm would simply ignore them entirely as contributing variance but not signal.

I'd love to test this. I wonder if DeepMind could hack together an AI that simultaneously runs instances of Alpha Zero, Stockfish, Rybka, etc., and looks at how all of them evaluate positions. Maybe if it sees Stockfish ignoring a sequence that A0 finds promising, it explicitly instructs Stockfish to explore it, to see if Stockfish was just being blind or if it saw some kind of dealbreaker ahead. The point is that this meta-AI would not just take votes from the various decision systems. It would effectively hold each one responsible for making a case for its preferred course of action and for diagnosing its downsides. The meta-AI would make the call about how to proceed after all the different agents had weighed in. If such a system were reasonably well designed, I don't imagine that mixing in a human agent would improve its play.
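
A rough sketch of that cross-examination, using the python-chess UCI interface (the engine paths are hypothetical, lc0 stands in for an AlphaZero-like engine, and the time limits are arbitrary; this illustrates the idea rather than anything DeepMind has built):

    import chess
    import chess.engine

    def score_cp(info):
        """Centipawn score from White's point of view; mates mapped to a large value."""
        return info["score"].white().score(mate_score=100000)

    def cross_examine(board, a_path="/usr/local/bin/lc0",
                      b_path="/usr/local/bin/stockfish", seconds=5.0):
        """Force engine B to search only the move engine A prefers, to see whether
        B had overlooked it or had found a concrete refutation."""
        a = chess.engine.SimpleEngine.popen_uci(a_path)   # stand-in for an AlphaZero-like engine
        b = chess.engine.SimpleEngine.popen_uci(b_path)
        try:
            limit = chess.engine.Limit(time=seconds)
            a_info = a.analyse(board, limit)
            b_info = b.analyse(board, limit)
            a_move = a_info["pv"][0]
            # Restrict B's search to A's choice (UCI "searchmoves" under the hood).
            b_on_a_move = b.analyse(board, limit, root_moves=[a_move])
            return {
                "a_prefers": a_move,
                "b_prefers": b_info["pv"][0],
                "b_eval_own_line": score_cp(b_info),
                "b_eval_of_a_line": score_cp(b_on_a_move),
            }
        finally:
            a.quit()
            b.quit()

The forced search is the key step: the second engine must either refute the first engine's preferred line concretely or reveal that it had simply not looked at it, which is the "making a case" idea described above.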

Got the humanoid, got the intruder!

So long as AI predictive analytics derives its intelligence from observing past human activities, we are doomed.

GIGO^2

So, we can expect chess championship contests to greatly boost GDP: higher profits from lower costs, i.e., zero prizes to winners, plus higher revenues from fans eager to see much better contests, resulting in much bigger contributions to GDP.

After all, robots never buy anything, so they never need to be paid, i.e., prize money will never be spent, so why offer it? If you want a big number, call the prize money "chesscoin" and just create it out of nothing, with no need to convert it into anything like food, cars, vacations, or housing.

Of course, all chess contest revenue will be in dollars, or Euros, etc.

I think the joke is:
The factory of the future will have two employees, a man and a dog.
The man to feed the dog, and the dog to keep the man away from the machines.

Learned that from a nuclear power operator. They loved that one...

AlphaZero is not perfect yet, and likely will never be. It could not spot the winning move in game 6 of Carlsen-Caruana that Stockfish on a big computer spotted. It also lost a few games against Stockfish 8 in its long match.

As with all machines, Man's role is to build a better algorithm. If algorithms are writing each other, Man's goal is to write a better algorithm-writing algorithm (e.g. a better computer language). It will be quite a while before humans are out of any of these loops, if only because a human still usually has to initiate most physical and financial processes, even if it's increasingly starting an avalanche with a snowflake.

It's not quite Zelazny's religion of Saint Jakes the Mechanophile which posits man as “the sexual organ of the machine that created him” but Man does initiate mechanical evolution.

Note, e.g., TensorFlow was released only two years before AlphaGo (though it was used internally at Google before that). TensorFlow doubtless already has numerous applications far less picayune than games, and seems the larger achievement.

https://en.wikipedia.org/wiki/TensorFlow

The "problem" with TC's thinking is that he essentially makes the same argument that the "God in the gaps" people/believers make. Well, sure, we've ruled out this, but we've not ruled out that, so it's still possible .... At some point, what is possible is no longer likely. But I've gotta comment on AI Chess: what a waste of energy. What is the (opportunity) cost of running thru 1000 games that only a handful of people will watch? I doubt this is the future of Chess - a spectator sport. My guess is that there will be too few people who are able to build their own engines to give drag-racing (of Chess engines) sufficient popularity to persist. Of course, perhaps it will become a way to benchmark different AI's, who knows?

Did anyone hear that computer/human chess team prediction in 2013 and think "This will be an enduring principle!"? Honestly, I'm shocked it took this long to be proven clearly incorrect.
