Why was Go a harder game for an AI to master than Chess?


AI became superior to the best human chess players around 20 years ago (when the second Deep Blue match concluded). However, it took until 2016 for an AI to beat a world-class Go player, and that feat required heavy use of machine learning.

My question is: why was/is Go a harder game for AIs to master than chess? I assume it has to do with Go's enormous branching factor; on a 13x13 board it starts at 169, and on a 19x19 board at 361. Meanwhile, chess typically has a branching factor of around 30.
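To put those numbers in perspective, here is a rough back-of-the-envelope comparison. It assumes a constant branching factor throughout the game, which is a simplification (the real factor shrinks as the board fills), but it shows how the gap compounds:

```python
# Rough game-tree size after d plies with a constant branching factor b.
# 30 and 361 are the illustrative figures from the question, not exact values.
def tree_size(branching_factor, depth):
    return branching_factor ** depth

chess = tree_size(30, 10)   # ~5.9e14 positions after 10 plies
go = tree_size(361, 10)     # ~3.8e25 positions after 10 plies
print(f"chess: {chess:.1e}  go: {go:.1e}  ratio: {go / chess:.1e}")
```

Even a modest ten-ply lookahead leaves Go's tree around ten orders of magnitude larger than chess's.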

Inertial Ignorance

Posted 2018-07-17T01:17:14.057

Reputation: 391



The branching factor is important, as it limits the effectiveness of search.

However, the branching factor in chess is already too high to search effectively without techniques that reduce the size of the search space. Even with millions of position evaluations per second, a computer can check only a tiny fraction of the possible continuations when looking for lines in its favour.
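One such space-reducing technique is alpha-beta pruning. The toy sketch below (not from the answer; a random game tree stands in for a real game) counts the nodes visited by plain minimax versus an alpha-beta variant of the same search, to show how much of the tree pruning can skip:

```python
import random

# Plain negamax: visits every node of the tree.
def minimax(depth, branching, rng, counter):
    counter[0] += 1
    if depth == 0:
        return rng.random()  # random leaf value stands in for an evaluation
    return max(-minimax(depth - 1, branching, rng, counter)
               for _ in range(branching))

# Negamax with alpha-beta pruning: skips branches that cannot
# change the result.
def alphabeta(depth, branching, rng, counter, alpha=-1e9, beta=1e9):
    counter[0] += 1
    if depth == 0:
        return rng.random()
    value = -1e9
    for _ in range(branching):
        value = max(value, -alphabeta(depth - 1, branching, rng,
                                      counter, -beta, -alpha))
        alpha = max(alpha, value)
        if alpha >= beta:
            break  # the opponent will avoid this line: prune it
    return value

plain, pruned = [0], [0]
minimax(6, 5, random.Random(0), plain)
alphabeta(6, 5, random.Random(0), pruned)
print(plain[0], pruned[0])  # alpha-beta visits far fewer nodes
```

With a good move ordering, alpha-beta roughly squares the depth reachable in the same time, which is one reason chess engines could outrun humans well before machine learning entered the picture.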

One key factor is heuristics - approximate measures of the value of each game state. A good heuristic can guide search and improve it by orders of magnitude. Chess admits several effective heuristics, such as weighted values of the pieces in play and scores for the areas of the board a side controls.
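The classic chess example is a weighted material count. This is a minimal sketch with the standard textbook piece values; real engines add many more terms (mobility, king safety, pawn structure, and so on):

```python
# Standard textbook piece values; the king carries no material value.
PIECE_VALUES = {'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9}

def material_score(pieces):
    """pieces: iterable of piece letters, uppercase = White, lowercase = Black.
    Positive scores favour White."""
    score = 0
    for piece in pieces:
        value = PIECE_VALUES.get(piece.upper(), 0)
        score += value if piece.isupper() else -value
    return score

# White is up a knight, Black is up a pawn: net +2 for White.
print(material_score(['K', 'Q', 'N', 'P', 'k', 'q', 'p', 'p']))  # -> 2
```

Nothing comparably simple works in Go: every stone is the same "piece", and its worth depends entirely on the surrounding configuration.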

Heuristics for Go are much harder to find. Here's a sample paper from a few years ago that makes an attempt; several similar ones are available online. Although plenty of options have been tried, and many were partially successful, none managed to bring the quality of computer play up to the standard of the best human players.

One of the major achievements of AlphaGo was training a neural network that produced good position evaluations - the "value network". The technology that made it possible to learn this approximate evaluation function from raw board positions was deep learning, which has developed rapidly since about 2010.

It is still possible that a more analytical heuristic approach could be found that challenges deep learning models trained by self-play reinforcement learning on raw board data. However, in some regards the reverse has been shown: AlphaZero took the same learning technique back to chess and demonstrated its effectiveness against "old school" hand-tuned expert heuristics.

Neil Slater

Posted 2018-07-17T01:17:14.057

Reputation: 14 632


Here's an interesting blog about Chess and Nim from 2011. Also an interesting 2005 paper Positions of Value *2 in Generalized Domineering and Chess.

– DukeZhou – 2018-07-17T17:36:27.130