Go Has Been Broken

Tic-tac-toe fell in 1952, checkers in 1994, chess in 1997, and now it looks like Go, the ancient Chinese game whose search space is many, many times greater than chess's, has fallen to a new AI from Google.

Go…our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.

Importantly, AlphaGo isn’t based primarily on searching a huge space but on deep neural networks that learned first from human players and then from simulated play with itself. The techniques, therefore, are not limited to Go.

AlphaGo will face its greatest challenge in March.

AlphaGo’s next challenge will be to play the top Go player in the world over the last decade, Lee Sedol. The match will take place this March in Seoul, South Korea.

Win or lose, I will bet that Lee Sedol is the last human champion the world will ever know.


Advances in decision-making are exciting. Steam and electricity made manual labor less necessary. Early computers streamlined business administration. So will this shiny new technology make middle-management less necessary?

"make middle-management less necessary"

Is that even possible :)

According to a recent study of IT and Communications executives and experts, 45% predict at least one AI machine on a corporate board of directors by this time next decade.

That nonsense tells you more about "IT and Communications executives and experts" than it does about reality.

Source. (Shift 13, on page 21)

Downsides include "Existential threat to humanity."

"It is difficult to make predictions, especially about the future."

But if you make enough of them you may increase your chances that at least one of them will be correct. Is it necessary to point out that the history of technological prognostication is even more dismal than the never-ending predictions made by practitioners of the so-called "dismal science"?

Balderdash. Fiddlesticks. Horsefeathers.

How in the world are you going to get D&O insurance on an AI?

A board member may consult an AI. But there's no way it has a seat.

Let's see AI tackle open-ended, unstructured, non-discrete, novel problems before this talk of AI replacing white-collar labor. Let's see an AI trading program that does not trade in a nanosecond but buys AND holds for spans of years....

Well, EIS has been used since at least the 1960s, but it's hard to see how AI could legally occupy a board slot.

Although, that would be a fun new twist on "corporations aren't people!"

Already happened. http://www.bbc.com/news/technology-27426942 "Algorithm appointed board director"

That looks more like poetic license. I'd like to see an official filing.

Yes, it's possible, haha

Everyone knows Uber. There's no manager deciding what to do: customers call a car and the algorithm finds the "best" transport solution according to the present variables. Same for ORION, the new "boss" of UPS drivers.

There are other examples beyond the delivery of persons and goods, such as managing the maintenance of metro systems: https://www.newscientist.com/article/mg22329764-000-the-ai-boss-that-deploys-hong-kongs-subway-engineers/ The most interesting word in the New Scientist article is "compliance". I'd be curious to see someone create a tax-evasion / money-laundering algorithm.

Your examples are good, but they were created by humans. As for the tax avoidance, what's the difference between those and when I place an order in the US with Vistaprint, they bill my card from their Netherlands BV, and the order is printed in Canada and shipped to me?

I assume the Netherlands is there for a tax reason.

So such an automated system does exist.

When I told my wife I can order custom business cards without speaking to a single human, or probably having a human even directly involved, she found that creepy.

This is an odd example. Why is that creepy?

The graphic design and printing process is no different from if you did it on your own computer and printer--you are involved, no other human is needed. The only innovation is 100% computer-based packaging and mailing, which is something that was solved decades ago.

Presumably the postal workers are people involved with the process, but there's no reason to expect anyone else to be.

Right! But ... then there will be speed matches, plus combined man-machine play vs. other man-machine teams.

The learning from humans plus learning from simulated play against itself is very important.

How much longer before facial recognition ("you're lying", said Ex Machina Eva) allows AI plus video to win at poker?
I guess 2 years; there's huge amounts of effort.

How long until "HAL 2000" level conversations can be had with computers?
Also maybe only 2 years, see the note about Facebook's M:

AI can already win at poker. Facial expressions are all but useless.

Are all physiological clues useless in high-level poker? That is, if great players' faces give nothing away, do their heart rates or some other responses? Short of putting them in an fMRI machine, of course.


"How long until “HAL 2000” level conversations can be had with computers?"

IBM's Watson commercials wherein it talks to people are very interesting to me...in part, because I assume (perhaps incorrectly) they're genuine. For example, there's the commercial with "Annabelle" in which Watson says, "You like things that start with 'P'." I think that provides a fascinating glimpse into the "mind" of a computer. It's not something almost any human would say.

you've never made small talk with an autist, I take it.

Those commercials strike up strange self defense mechanisms in me. Programming by all the movies on these subjects, aye?

It absolutely seemed like a parody commercial, like one put in those '80s existential-threat movies.

What are your odds? If less than 99%, I'll be happy to take the other side. Note that I'm guaranteed not to lose: "ever" is too fuzzy for you to win, AT.

European champions are far from the top of the pecking order. It would be comparable not to Kasparov losing the match, but to some obscure Romanian grossmeister losing it. What makes it news is that pro players are very different from "amateurs" in playing strength. It is still an amazing success for a program; previous news was about a program beating a pro at a relatively high handicap.

If "human champion the world will ever know" were better defined, it would be interesting bet. Kasparov wasn't for sure "the last", even though he lost.

I for one, as a chess fan, welcome this news. Not because of the exciting AI applications outside of Go, but because I hate Go players. They are so conceited, claiming that their game cannot be duplicated by a robot since it somehow is on a mystical level, what a crock!

Ancient Chinese game -- yet the illustration is Japanese. The home of Go now is Japan, and has been for a long time.

It's actually the Koreans who have been dominating the game for quite some time now.

Cool. Now looking forward to the "kill all humans" milestone. :)

I'll be impressed when a computer wins consistently at Diplomacy

They already do better than John Kerry just by refusing to play.

I wonder how Putin or Netanyahu would fare against ELIZA.

I'm working on that for Gunboat, and I'm pretty sure it'll do well. There's a community working on AI Diplomacy with a limited language, but doing it with natural language is I think going to be fairly difficult. (It's easy to get order histories to train on, it may be more difficult to get message histories to train on.)

Partner with webDip or varDip, perhaps?

Define "broken."

It's pretty easy to show tic-tac-toe is a draw with perfect play and to teach a computer to do so. Children figure it out in the early grades of elementary school.

Checkers wasn't actually solved until 2007.

5x5 go has been solved; I'm not sure how much progress, if any, has been made on standard (19x19) go.

I don't expect chess to be solved in my lifetime, though plenty of endgames have been.
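The tic-tac-toe claim above is easy to check by machine. Here is a minimal sketch of my own (a plain negamax over the few thousand reachable positions, memoized; the code and names are illustrative, not from any program discussed here):

```python
from functools import lru_cache

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def negamax(board, player):
    """Value of `board` for the side to move: +1 win, 0 draw, -1 loss."""
    if winner(board):          # the previous move just won the game
        return -1
    if '.' not in board:       # board full, no winner: a draw
        return 0
    other = 'O' if player == 'X' else 'X'
    return max(-negamax(board[:i] + player + board[i + 1:], other)
               for i, cell in enumerate(board) if cell == '.')

# Perfect play from the empty board is a draw:
print(negamax('.' * 9, 'X'))   # prints 0
```

The same exhaustive approach is exactly what stops scaling: chess and Go have far too many positions to enumerate, which is why they needed pruning, evaluation heuristics, and now learned networks.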

By broken he clearly means "a computer can play as well as an elite human", not "game has been solved".

And then AlphaGo drove home. :)

Seriously, this kind of neural network could have a lot of implications for humanlike computing.

One of the interesting major milestones to fall soon should be conversational voice. CGI can be done by hand, but voice work is still done almost entirely by actors -- apparently a voice signal embeds a lot of information that is difficult to model.

I do think that deep-learning neural networks could have a lot of implications for humanlike computing, provided that computational power continues to grow. However, I think we should be cautious in extrapolating these results beyond parlor games to scenarios with multiple goal-driven decision makers. Go, for example, is a two-player, zero-sum (win, lose, or draw), perfect-information (all relevant information is available to all players at all times), non-random (no dice rolling or card dealing) game. It is difficult to believe that many real-world scenarios in which humans interact would meet these conditions. Yes, the state space of Go is huge, but that is only one element adding to the complexity of a scenario (I would argue that the zero-sumness of Go is what makes this possible, but that's another story).

Albeit, "deep-learning" neural network with online optimization algorithms (I think they used a variant of something known as "Monte Carlo Tree Search" but admittedly did not read the paper in detail) and plenty of training data are certainly more promising in scenarios with only a single decision maker but as of right now, it is difficult to believe that these results provide evidence that machines are on their way to becoming more humanlike (in the sense that they interact with other humans and robots).

Actually, Go is not zero-sum. While ploughing through a huge SGF collection of championship games, I came across one game with a result of "both players lose." So it seems to be a slightly negative-sum game, based on the data.

AI already does a lot of that -- MRP and ERP systems have been around for decades and they often make those kinds of determinations under uncertainty about sales forecasts and vendor delivery windows.

Go was interesting precisely because, despite its seeming simplicity, humanlike computing had a lot of advantages, at least relative to chess. It's a milestone.

Well, a negative implication of that is that soon, with both CGI *and* voice work capable of being computer-generated...

*if* a network is hacked and, say, receives an order/message/address from the president or another high figure, it might look and sound exactly like the man and his aides...but it's nothing but a computer!

Very soon mankind is going to have to be worried about that.

"Importantly, AlphaGo isn’t based primarily on searching a huge space but on deep neural networks"

Actually, AlphaGo DOES search a huge space but does it in non-obvious ways. The first clue that searching is going on is that it ran through myriad Go matches against itself. That's where the searching occurred. The non-productive search results were rejected and the successful ones retained. Just because the search occurred ahead of time does not mean that brute force was not involved.

More generally, there is wide misunderstanding of so-called "neural nets". They are misnamed (by rather naive and hubristic computer scientists.) It is impossible for them to mimic human neurons because we do not understand how human neurons work or how their workings lead to intelligent behavior.

Under the covers, these deep-learning neural networks work by linking together chains of probability values. Taken in sum, they are likely to represent a very sophisticated statistical calculator, but a statistical calculator nonetheless.
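To make the "statistical calculator" point concrete, here is a toy sketch of my own (weights hand-picked, nothing learned): a two-layer network computing XOR. Each output is nothing more than weighted sums pushed through a squashing function; "deep" networks stack more such layers and fit the weights from data instead of by hand.

```python
import math

def sigmoid(x):
    # squashes any real number into (0, 1), a probability-like value
    return 1.0 / (1.0 + math.exp(-x))

def xor_net(x1, x2):
    # hidden layer: two units hand-wired to approximate OR and NAND
    h1 = sigmoid(10 * x1 + 10 * x2 - 5)      # ~ x1 OR x2
    h2 = sigmoid(-10 * x1 - 10 * x2 + 15)    # ~ NOT (x1 AND x2)
    # output unit: ~ h1 AND h2, which is XOR
    return sigmoid(10 * h1 + 10 * h2 - 15)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(xor_net(a, b)))   # last column: 0, 1, 1, 0
```

Whether "a very large pile of fitted weights" counts as intelligence is exactly the dispute in this thread; the mechanics themselves are just arithmetic like the above, at enormous scale.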

nice clarifications

And in case anyone is impressed by "deep learning" - it is mostly just neural networks + HUGE data sets.

In fact, even top AI researchers often embarrassingly admit that most of the advances in the AI field have been driven by bigger data sets and more computational power rather than any new profound insights.

IBM Watson, as well as this Go program, is an impressive feat of engineering, but nothing insightful from a scientific point of view.

I don't think this is accurate, but would love for someone else to opine.

Searching the future game space is different from searching past games and positions to see what did well and feeding that into the neural network's training set. The first is definitely brute-force search. The second is searching to create the training set for the algorithm. The algorithm then, in somewhat of a black box, reads through all of that training set to update its internal nodes. Then you simply give the network an input (the current board) and, based on that training, it gives you an output. The distinction: brute force requires much of the computing power to be spent on each move, while the second requires much of the computing power to be spent during training.

Again, I'd love for someone to correct me where I am wrong here as I find all of this interesting and want to make sure I have an understanding of what is going on.

I will grant that there is a difference between attacking the search space on every move and creating a training set, but I think the difference is a matter of efficiency rather than method. After all, alpha-beta tree-pruning methods have been around since the 1950s. I am sure there was a lot of programming in this effort that surpasses anything I will do in my career, but it has not been shown that neural nets are doing more than storing the training set in an efficient and useful manner. I just do not consider any of these results the breakthrough that AI has been waiting for; I would not be surprised if they proved more like a dead end.

Just my two cents.

That's really elegantly put, Scott. Over two decades ago my honours thesis amounted to building a custom neural network, which in turn was a representation of the various training sets I trained it on. Back in the day the classic story was of a neural network that researchers at first thought recognised tanks in camouflage, but they later realised that the photos with tanks in them were all taken on overcast days. With so much computing power now, it isn't surprising that we can store quite large training sets, nor that this leads to sensible play in a complex game. But as you're saying, it's not of a different order of magnitude from being impressed that a computer can quickly calculate the square root of 789 * 236.

Hah! I remember that same story about the tanks when doing some work for a thesis on neural networks back in 03-04!

I genuinely cannot tell how much innovation is taking place. I *feel* like it is a lot, because I just don't remember people talking about neural networks being able to solve problems like this 12+ years ago. For instance, the heuristic for evaluating a chess board is pretty important to a chess AI algorithm. So having an idea of what is valuable, in essence understanding chess to some degree, is important. Is the same true for the Go algorithm, or did it truly not have anything beyond general neural network programming? If that is the case, I would say the advancement is great indeed.


Um, it's also doing a Monte Carlo Tree Search. It's the MCTS version that won the match with the human.

"The final version of AlphaGo used 40 search threads, 48 CPUs, and 8 GPUs."

It looks like a pretty massive improvement... This thing's a monster.

I'm eyeballing a chart, but maybe a 1200 Elo improvement in the state of the art.

This is wrong. The version that played him was distributed and used more CPU and GPU cores than that. Monte Carlo tree search is only one component of the system; everyone is already using MCTS, the innovation is elsewhere.


Ugly bag of mostly water rendered obsolete.

You know, they just had that episode on BBC reruns.

It's interesting that AI programs always 'break', 'solve', or 'hack' chess, checkers, or Go... it would of course be much more impressive if AI 'played' these games... but that might be a few years off...

Deep neural networks (plus the exponential decrease in the cost of computation) and CRISPR: the great stagnation, if it even occurred, is just about over. These two technologies are creating startling innovations just about every day.

The great stagnation was fun, but it's just about over.

"The great stagnation was fun, but it’s just about over."

Yes, I agree (minus CRISPR...it's all about computers).

Let's say world per-capita economic growth is mainly a function of the growth rate in the number of human brains. The world population annual percentage growth rate peaked in the 1970s, and has dropped. World per-capita GDP growth rate peaked at almost the same time and has dropped:

World population growth rate and per-capita GDP growth rate

However, by the mid-2020s, computers will be adding the equivalent of billions of human brains per year. And by the 2030s, it will be trillions of human brains per year. This will rocket per-capita GDP growth rates upward (over 10% per year):

Why economic growth will be spectacular


As I recall, you based part of your estimates on a steady 7% increase in computer sales well into the 2030s. Have you changed anything since I first read it ten years ago?

Before I saw your discussion with Arnold Kling, I extrapolated GDP/capita growth for the U.S. from 1780 out to 2050 for fun and got around 10%. That's the point where you start to shake your head. But as long as you've confirmed it, we know it will happen...

Not that I believed my 10% growth for 2050 based on extrapolation since a minor adjustment in the early decades can swing things a lot, but it seemed to be *big*.

"As I recall, you based part of your estimates on a steady 7% increase in computer sales well into the 2030s. Have you changed anything since I first read it ten years ago?"

It's only been 10 years...I haven't had the time. ;-)

Seriously, about the only thing I've done was to read a paper to which Peter Schaffer linked:

Adding up the world's storage and computation capacities

That paper seems to be generally in line with my calculations. I'll try to post a more detailed analysis on my blog sometime this week or next. Or at least within the next decade. ;-)

Hi Todd,

The results of my new calculation show an almost scary similarity to the results of my old calculations.

Bottom line: world economic growth will become spectacular...in probably not less than 5 years, but almost certainly not more than 20 years. You read it here first. (Or on my blog a decade ago.)


It will bring us as impressive increases in living standards as did the Deep Blue match with Kasparov => none.
If you want to check whether there is a cat in a picture, you will feed the algorithm 10 million pictures with cats and 10 million pictures without cats. Then you show it a picture of Garfield and the algorithm is baffled. This is the current state of AI.
There is some inspirational work done by Numenta, which is imho a better approach than neural networks. However, given the current state of hardware, mathematical approaches, such as neural networks beat brain simulators, like Numenta.

The "4th Down Punt or Go For It" problem has been solved.

Yet most coaches still adhere closely to convention, due to the disproportionate reputational risk that comes with playing the percentages when they're non-intuitive.
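For readers outside football, "playing the percentages" here is just an expected-value comparison. A sketch with entirely made-up numbers (none of these probabilities or point values come from real data):

```python
# Hypothetical 4th-and-2 near midfield; every number is invented for illustration.
p_convert = 0.60     # chance the offense gains the first down
ev_convert = 2.5     # expected points with a fresh set of downs
ev_fail = -1.8       # expected points after a turnover on downs
ev_punt = -0.4       # expected points after an average punt

ev_go = p_convert * ev_convert + (1 - p_convert) * ev_fail
# with these invented numbers, going for it clearly beats punting
print(ev_go, ev_punt, "go for it" if ev_go > ev_punt else "punt")
```

The coach's dilemma is that the non-intuitive choice is judged on one bad outcome, while the computation is judged on the long run.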

One can argue that the executive suite will prize "gut" and "intuition" even more, since any fool can do what a computer tells him/her to do.


That's great progress, but its importance should not be exaggerated. Go and chess are finite combinatorial games, and we have known forever an algorithm capable of beating every human and every machine built to this day: explore the tree all the way to the end. Of course I know the Google Go program works in a much more clever way, but at the same time it uses huge computing power, beyond the dream of any human. It is not really so impressive to beat humans at a game where sheer computing power wins trivially, when one has computing power billions of times larger than any human adversary's. When do you think a chess or Go program limited to the power and memory of a 1980s Apple II will beat every human?

As for poker, under the rules I know at least (I don't know the rules of the variant most played today; I need to learn), a simple and general theorem of von Neumann says that there is a (mixed) strategy guaranteeing a non-negative expected gain (of course, if the adversaries play correctly, it is 0), exactly as for rock, paper, scissors. It is not surprising that a computer playing this strategy, or something approaching it, beats every human.
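The von Neumann point is easy to verify for rock-paper-scissors itself: the uniform mix earns exactly zero against any reply. A small check of my own (names and layout are illustrative):

```python
from fractions import Fraction

# Row player's payoff: +1 win, -1 loss, 0 tie.
# Rows and columns are ordered rock, paper, scissors.
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

uniform = [Fraction(1, 3)] * 3

def expected(mix, opp):
    """Expected payoff of mixed strategy `mix` against pure move `opp`."""
    return sum(p * PAYOFF[i][opp] for i, p in enumerate(mix))

# Against every pure reply the uniform mix nets exactly 0, so no
# opponent, human or machine, can expect to beat it in the long run.
print(all(expected(uniform, j) == 0 for j in range(3)))   # prints True
```

Real poker bots go further than this: they exploit opponents' deviations from equilibrium rather than merely guaranteeing a floor, but the minimax guarantee is the foundation.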

All in all, I am more impressed by Siri. My daughter plays with it a lot and sometimes says mean things; I get very sad for it, and told her not to speak badly to people.

This kind of stuff is demeaning. Can't they focus more quietly on breaking abstract theoretical computational problems that might be a superset of these games, instead of this inane triumphalism about overcoming human ability in specific games? Every time you hear about Kasparov now, a man who honed an exceedingly rare genius through years of dedicated effort to the height of human potential, you have to put in front of him an overgrown R2-D2 with a team of eggheads around it, consulting their clipboards and checking the cooling pumps. It is characteristic that these overexcited libertarians, with their stunted sense of proportion and depth in human life, would be cheering this kind of thing on.

Blue pill it is then!

This news should be much, much *much* bigger than the coverage it's receiving. I read plenty of stories, by people a good deal higher on the A.I. totem pole than usual, confidently proclaiming that computers would not beat the best humans on the big boards for at least the next 20 years, due to the combinatorial explosion in the difficulty of the problem. Four years ago! For me personally, this is definitely a top-3 AI story of the year.

It's interesting that predictions were too pessimistic in this case. Usually technology predictions are overly optimistic.

There is some misinformation in the news and related posts about the AlphaGo development. As a longtime Go player and, many years ago, an AI researcher, I think some of the info presented in the news is misleading or not entirely true, and hence worth debating.

1. Mr. Lee is among the top players in the world. However, he is not the current undisputed best player in the world, whereas Kasparov was clearly the best chess player in 1997 when Deep Blue beat him. Just last month, Lee lost a 5-game match (2-3) at the MLILY Cup to the young Chinese player Mr. Ke (柯洁). Lee also loses routinely to other top Chinese and Korean players. He in fact peaked many years ago.

2. Mr. Fan (樊麾二段) is not anywhere close to the level of top professional players. Google wants to mislead others into believing he is, for obvious reasons. Good publicity but not good science, I say. Based on observations from top professional players, many believe AlphaGo plays at the level of a professional 2 dan to 5 dan (the full scale runs from 1 dan to 9 dan, with 9 being the strongest). Mr. Lee is a strong 9 dan for sure. But again, as argued previously, even if AlphaGo beats Lee, it is still far-fetched to say it can beat the best human player. However, I am sure we will all be impressed.

One day, AI will beat the best human Go player for sure. It's just a question of when. My hunch is Google is making a good effort, but we are still years away from achieving this goal.

Thanks for sharing, anything lobbed over the language barrier is always interesting.

A side-effect of living in times of exponential curves. You see something like this and really wonder what the heck is next.

Just to clarify some things I see in the comments:

-This is not general AI. The neural network was built specifically for the well-defined space of the Go board. It would not be able to perform any other task.

-The parameters of the network had to be carefully set to work for Go. This means that even transitioning from Go to a different game in the same space is likely to require reparameterization.

-The network needs access to a huge repository of professional games, much bigger than a human would ever see or experience. It's not able to learn a novel task of this difficulty by just knowing the rules.

-As mentioned above, Go is deterministic. Even though there are algorithms for RL in random games, they are much harder and require much more fine-tuning.

-This paper (http://www.nature.com/nature/journal/v518/n7540/full/nature14236.html) is much more impressive.

-Despite all this, the fact that progress on Go is happening faster than expected is really exciting, just not revolutionary.
