The match got a lot of press, and I doubt anyone is surprised that Alpha Zero crushed Stockfish.
To me, what's really salient is that, much like humans, AlphaZero searches far fewer positions than its predecessors: the paper claims it looks at "only" 80,000 positions per second, compared to Stockfish's 70 million per second.
For those who remember Matthew Lai's Giraffe chess engine:
> However, it is interesting to note that the way computers play chess is very different from how humans play. While both humans and computers search ahead to predict how the game will go on, humans are much more selective in which branches of the game tree to explore. Computers, on the other hand, rely on brute force to explore as many continuations as possible, even ones that will be immediately thrown out by any skilled human. In a sense, the way humans play chess is much more computationally efficient - using Garry Kasparov vs Deep Blue as an example, Kasparov could not have been searching more than 3-5 positions per second, while Deep Blue, a supercomputer with 480 custom "chess processors", searched about 200 million positions per second to play at approximately equal strength (Deep Blue won the 6-game match with 2 wins, 3 draws, and 1 loss).
>
> How can a human searching 3-5 positions per second be as strong as a computer searching 200 million positions per second? And is it possible to build even stronger chess computers than what we have today, by making them more computationally efficient? Those are the questions this project investigates.
[Lai was tapped by DeepMind as a researcher last year]
But what I'm interested in at the moment is the decision speed in these matches:
- What was the average time to make a move in the AlphaZero vs. Stockfish match?
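For a rough sense of scale while that question is open, here is my own back-of-envelope arithmetic (not from the paper): it takes the two reported search speeds and a *hypothetical* fixed time control of one minute per move, and compares how many positions each engine would examine before committing to a move.

```python
# Back-of-envelope scale comparison (my own arithmetic, not from the paper).
# The per-second speeds are the reported figures; the one-minute-per-move
# time control is an assumption for illustration only.

ALPHAZERO_NPS = 80_000        # positions/second claimed for AlphaZero
STOCKFISH_NPS = 70_000_000    # positions/second claimed for Stockfish
SECONDS_PER_MOVE = 60         # hypothetical fixed time control

az_per_move = ALPHAZERO_NPS * SECONDS_PER_MOVE   # 4,800,000
sf_per_move = STOCKFISH_NPS * SECONDS_PER_MOVE   # 4,200,000,000

print(f"AlphaZero: ~{az_per_move:,} positions per move")
print(f"Stockfish: ~{sf_per_move:,} positions per move")
print(f"Stockfish searches ~{sf_per_move // az_per_move}x more per move")
```

Whatever the actual time control was, the ratio between the two engines stays fixed at 875x, which is the number that makes the "computational efficiency" framing from Lai's thesis so striking.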