The man-machine battle moves beyond the board

Korea is, in general, a TV-friendly country. Regular restaurants have sets continuously showing soap operas, news, or reality shows. The surprise of the last two days was that my lunch restaurants were showing a board game. Students all over campus were watching it too. Baduk (a.k.a. go) is quite popular TV entertainment, but a niche one, relegated to special baduk channels. I have certainly never seen it in a restaurant before. The reason, as you probably already know, was that one of the very top players, Lee Sedol, was taking on a computer program, AlphaGo, in a five-game match with a million dollars up for grabs. Will this be the final battle between man and machine over the board?

Right now, AlphaGo has won the first two games. While it is, of course, too early (n = 2) to jump to the conclusion that AIs are better than humans at go, that moment can't be too far away. This is not completely surprising. I haven't really followed the development of go programming, but I remember the excitement three or four years ago when go programs first started beating elite human players (albeit in handicap games). The key development at that time, Monte Carlo tree search (MCTS), is something quite bizarre that I will come back to later. Now Google's DeepMind has topped it off with trendy deep learning and PR machinery worthy of an Olympic opening ceremony. The result: I get a break from sobbing housewives as lunch entertainment.

Knowing the answer, it might seem inevitable that machines would finally conquer go. Before the success of MCTS, however, you could hear claims that it might never happen. Why this progress has been so hard to foresee comes down to quite an interesting discrepancy between human and artificial intelligence.

The main challenge for both man and machine is the combinatorial explosion in the number of positions a few moves into the future. For chess and xiangqi, it is an exponential increase with a base of around 30. For go, the base has been estimated at 250. The entire search tree of go is some million googols times the number of protons in the universe, i.e., far too large to ever just search (or "calculate" in the lingo of chess players). This means that, sure, you can look into the future, but at some point you just have to assess the position itself, i.e., judge it from the configuration of the pieces or stones. For computer programs, this assessment is called an eval(uation) function (or "understanding" among chess players). If the eval after an opponent's move is too good for you, you stop considering that direction of the game; after all, the opponent should be smart enough to avoid it (and if not, that's just a pleasant problem to have). If the eval after one of your own moves is too bad, you also stop considering it. In this way, one can tame the combinatorial explosion, at least for some moves ahead, and make a move.
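To make that concrete, here is a minimal sketch of such bounded look-ahead: plain minimax with alpha-beta cutoffs, demonstrated on a toy take-away game I made up for illustration (take 1 to 3 sticks; whoever takes the last stick wins). Nothing here comes from a real engine; the deliberately dumb eval function just shows where the search has to stop calculating and start trusting its judgment.

```python
import math

def legal_moves(sticks):
    return [m for m in (1, 2, 3) if m <= sticks]

def evaluate(sticks, maximizing):
    # Deliberately dumb stand-in eval: we only recognize finished games.
    if sticks == 0:
        return -1 if maximizing else 1  # the side to move has already lost
    return 0  # "no idea" for anything the search could not finish

def alphabeta(sticks, depth, alpha, beta, maximizing):
    if depth == 0 or sticks == 0:
        return evaluate(sticks, maximizing)  # horizon: trust the eval
    if maximizing:
        best = -math.inf
        for move in legal_moves(sticks):
            best = max(best, alphabeta(sticks - move, depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:  # too good: the opponent will avoid this line
                break          # ... so prune the remaining moves
        return best
    else:
        best = math.inf
        for move in legal_moves(sticks):
            best = min(best, alphabeta(sticks - move, depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if beta <= alpha:  # too bad for us: prune
                break
        return best

# A deep search proves that 21 sticks is a win for the player to move:
print(alphabeta(21, 32, -math.inf, math.inf, True))  # -> 1
# A shallow search hits its horizon and has to fall back on the eval:
print(alphabeta(21, 4, -math.inf, math.inf, True))   # -> 0
```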

Human players, it turns out, have amazingly good built-in eval functions. A modern chess program on a decent desktop evaluates ten million positions per second, so how could it take until the 90s for chess programs to beat the best human players? After all, chess programming was the Drosophila of artificial intelligence for ages (maybe since The Turk, certainly since Turing's "paper machine"). The reason is that computer eval functions are much worse than those of humans, so computers need to compensate with their speed and their ability to look far and wide. With AlphaGo, go engines seem to have improved their eval function enough to bring the prize home.

A combinatorial board game program is more than an eval function bounding a search tree. It also needs to propagate information about possible future positions back to the present, which means modeling the opponent and his or her limitations. Current programs mostly try to be as safe as possible (i.e., they assume the opponent plays as well as possible). Human players sometimes relax this so-called minimax decision-making to steer the game toward positions where the opponent is more likely to make mistakes. Another difference between humans and programs is that the latter evaluate a position as a single number, while humans form a richer impression of a position. I mention these points only to argue that machines still have a long way to go to fulfill their potential. This seems to be the case for go in particular; the success of MCTS is proof in itself. The idea of MCTS is simply to disregard some random positions along the way, favoring depth of search over width. The fact that this works gives me the feeling that the eval function is far from optimal.
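For the curious, here is roughly what that bizarre idea looks like in code: a bare-bones UCT-style sketch of my own, on the same toy take-away game as above, and emphatically not AlphaGo's actual implementation. Note that there is no eval function anywhere; random playouts stand in for judgment.

```python
import math, random

class Node:
    def __init__(self, sticks, parent=None, move=None):
        self.sticks, self.parent, self.move = sticks, parent, move
        self.untried = [m for m in (1, 2, 3) if m <= sticks]
        self.children, self.visits, self.wins = [], 0, 0.0

    def select(self, c=1.4):
        # UCB1: exploit high win rates but keep exploring rarely tried moves.
        return max(self.children, key=lambda n: n.wins / n.visits
                   + c * math.sqrt(math.log(self.visits) / n.visits))

def mcts(sticks, playouts=5000):
    root = Node(sticks)
    for _ in range(playouts):
        node = root
        # 1. Selection: follow statistically promising branches only,
        #    ignoring the rest of the width of the tree.
        while not node.untried and node.children:
            node = node.select()
        # 2. Expansion: add one untried move as a new leaf.
        if node.untried:
            move = node.untried.pop(random.randrange(len(node.untried)))
            node.children.append(Node(node.sticks - move, node, move))
            node = node.children[-1]
        # 3. Simulation: random playout to the end -- depth instead of width.
        s, turn = node.sticks, 0  # turn 0 = the player to move at `node`
        while s > 0:
            s -= random.choice([m for m in (1, 2, 3) if m <= s])
            turn ^= 1
        leaf_mover_won = (turn == 1)  # the last stick was taken on turn 0
        # 4. Backpropagation: credit each node from the perspective of the
        #    player whose move created it, flipping sides on the way up.
        won = not leaf_mover_won
        while node is not None:
            node.visits += 1
            node.wins += won
            node = node.parent
            won = not won
    return max(root.children, key=lambda n: n.visits).move  # most robust move

print(mcts(21))  # usually prints 1: leave a multiple of 4 behind
```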

As an armchair chess fan, I would love to see computers pointing to new possibilities in the game. They have made players re-evaluate some openings and endgames, but that is far from a superior intelligence seeing the board like the only sober person in a room of partying frat boys. AlphaGo seems to do a bit of that, which makes me jealous. The Korean expert commentators (according to my friend and colleague Juyong Park; if this is wrong, it's probably my misunderstanding) initially dissed AlphaGo's moves, only to take it back later. By the end of the game, the attitude had turned into "let's learn from the new master" mode.

Chess engines are victims of their own success. Currently, two chess programs (or engines) dominate the chess engine tournaments (yes, they exist! there is even a film about them!): the proprietary Komodo and the open-source Stockfish. Reading Stockfish's source code is very disappointing. It reads just like a chess instruction book: the eval function breaks the game down into the classical phases (tapering its score between middle-game and endgame values), etc. Obviously, that approach will never suggest any super-weirdo, outer-space-level moves that later prove winning. (Maybe there are none, but anyway.) I hope AlphaGo can stir up the world of chess programming.
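To show what I mean by "reads like a chess instruction book", here is a toy version of that style of hand-crafted evaluation: material values tapered between middle-game and endgame scores by how many pieces remain. The numbers and the piece-list interface are made up for illustration and are nothing like Stockfish's actual code, but the flavor is the same.

```python
# Illustrative piece values in centipawns; not Stockfish's real numbers.
MG = {"P": 82, "N": 337, "B": 365, "R": 477, "Q": 1025}   # middle-game values
EG = {"P": 94, "N": 281, "B": 297, "R": 512, "Q": 936}    # endgame values
PHASE = {"P": 0, "N": 1, "B": 1, "R": 2, "Q": 4}          # game-phase weights

def evaluate(pieces):
    """pieces: list of (piece_letter, color) tuples, e.g. ('Q', 'white')."""
    mg = eg = phase = 0
    for piece, color in pieces:
        sign = 1 if color == "white" else -1
        mg += sign * MG[piece]
        eg += sign * EG[piece]
        phase += PHASE[piece]
    # A real engine adds pawn structure, king safety, mobility, etc. here.
    phase = min(phase, 24)  # 24 = all minor and major pieces still on board
    return (mg * phase + eg * (24 - phase)) // 24  # taper between the phases

# White has a rook against knight and pawn; score is in centipawns:
print(evaluate([("R", "white"), ("N", "black"), ("P", "black")]))  # -> 127
```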

As for go, this will be the beginning of a new era. The player who doesn't learn from the computers will be left behind. This will probably be a change for the better. The thing that makes chess fun, even for a crappy player like me, is analyzing games with a computer to find where things went wrong. You'd have to be very conservative not to appreciate that. At the end of the day, board games are games between people; the human factors provide all the entertainment. Speaking of which: the Candidates Tournament for the world chess championship, the second most important event in chess, starts today in Moscow! Isn't that even better than AlphaGo vs. Lee? Go Nakamura!
