How Facebook’s AI Researchers Built a Game-Changing Go Engine

The best human players easily beat the best computer-based Go engines. That looks set to change thanks to a new approach pioneered by Facebook’s artificial intelligence researchers.

One of the last bastions of human mastery over computers is the game of Go—the best human players beat the best Go engines with ease.

That’s largely because of the way Go engines work. These machines search through the tree of possible future moves to find the strongest one.

While this brute-force approach works well in draughts and chess, it does not work well in Go because of the sheer number of possible positions on a board. In draughts, the number of board positions is around 10^20; in chess it is 10^60.

But in Go it is 10^100—significantly more than the number of particles in the universe. Searching through all of these is infeasible even for the most powerful computers.

So in recent years, computer scientists have begun to explore a different approach. Their idea is to find the most powerful next move using a neural network to evaluate the board. That gets around the problem of searching. However, neural networks have yet to match the level of good amateur players or even the best search-based Go engines.

Today, that changes thanks to the work of Yuandong Tian at Facebook AI Research in Menlo Park and Yan Zhu at Rutgers University in New Jersey. These guys have combined a powerful neural network approach with a search-based machine to create a Go engine that plays at an impressively advanced level and has room to improve.

The new approach is based in large part on advances that have been made in neural network-based machine learning in just the last year or two. This is the result of a better understanding of how neural networks work and the availability of larger and better databases to train them.

This is how Tian and Zhu begin. They start with a database of some 250,000 real Go games, use 220,000 of them for training, and reserve the rest to test the neural network’s ability to predict the next moves played in real games.
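Stripped to its essentials, this kind of next-move prediction is supervised learning: encode the board position as features, output a probability for each of the 361 points, and nudge the model toward the move the expert actually played. The following is a hedged sketch of that idea using a single linear layer in place of Darkforest’s deep convolutional network; all names and the toy training data are illustrative, not Facebook’s implementation.

```python
import numpy as np

BOARD = 19
POINTS = BOARD * BOARD  # 361 intersections on a Go board

rng = np.random.default_rng(0)

def encode(board):
    """Flatten a 19x19 board (+1 our stones, -1 theirs, 0 empty) into a
    feature vector, with a constant bias feature appended."""
    return np.concatenate([board.reshape(-1).astype(np.float64), [1.0]])

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# A single linear layer standing in for the deep convolutional network.
W = rng.normal(scale=0.01, size=(POINTS, POINTS + 1))

def predict(board):
    """Probability distribution over the 361 points for the next move."""
    return softmax(W @ encode(board))

def train_step(board, move_index, lr=0.1):
    """One step of cross-entropy gradient descent toward the expert's move."""
    global W
    x = encode(board)
    p = softmax(W @ x)
    grad = p.copy()
    grad[move_index] -= 1.0          # gradient of cross-entropy w.r.t. logits
    W -= lr * np.outer(grad, x)

# Toy "expert game": on an empty board the expert always plays the center.
empty = np.zeros((BOARD, BOARD))
center = (BOARD // 2) * BOARD + BOARD // 2
for _ in range(200):
    train_step(empty, center)

print(int(np.argmax(predict(empty))) == center)
```

After a couple of hundred updates the model's most probable move on the empty board matches the expert's choice; the real system does the same thing at vastly larger scale, over 220,000 full games and a far deeper network.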

This produced a decent Go engine, which Tian and Zhu call Darkforest. It earns a respectable ranking in matches against humans.

Go has a comprehensive, if complex, system for ranking players. Beginners are given a kyu ranking that ranges from 30 kyu (the lowest) to 1 kyu. Better players can achieve a dan level from 1d (the lowest) to 7d (an advanced amateur). The best players have professional levels ranging from 1p to 9p (the highest level).
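The three sub-scales above (kyu, amateur dan, professional dan) form one ascending ladder, which makes comparisons like the ones below easier to read. A small sketch of that ladder as code; the scale follows the article’s description, and the helper name is mine:

```python
def rank_to_index(rank):
    """Map a Go rank string to a position on one ascending ladder:
    30k (weakest) ... 1k, then 1d ... 7d amateur, then 1p ... 9p pro."""
    level, kind = int(rank[:-1]), rank[-1]
    if kind == "k":          # kyu: 30k -> 0, 1k -> 29
        assert 1 <= level <= 30
        return 30 - level
    if kind == "d":          # amateur dan: 1d -> 30, 7d -> 36
        assert 1 <= level <= 7
        return 29 + level
    if kind == "p":          # professional dan: 1p -> 37, 9p -> 45
        assert 1 <= level <= 9
        return 36 + level
    raise ValueError(f"unknown rank: {rank}")

# Darkforest at ~1d versus earlier neural-network engines at ~5 kyu:
print(rank_to_index("1d") - rank_to_index("5k"))  # 5 grades stronger
```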

Darkforest plays at the 1d-2d level, a decent amateur level. That’s significantly better than other neural network-based Go engines, which play at around 4-5 kyu.

But Darkforest has a significant weakness. While it is good at evaluating the global board position, its local tactics are poor, a common problem with neural network-based engines.

But search-based engines have exactly the opposite weakness. They are good at local tactics, since they can search through many similar positions, but they are weak at evaluating the global strength of a position.

That suggests an obvious way forward—to combine a neural network with a search-based approach.

That’s easier said than done, however. Search-based engines work much faster than neural nets, typically examining some 16,000 positions per second. By comparison, Darkforest takes 0.2 seconds to evaluate a single position.

For this reason, combining these approaches is a nontrivial task, which Tian and Zhu solve by running the processes in parallel with frequent communication between the two.
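Tian and Zhu’s exact scheme isn’t spelled out here, but one hedged way to picture “running in parallel with frequent communication” is a fast search loop that keeps expanding positions while a slow policy-network thread pushes fresh move suggestions into a shared queue. A toy sketch, with every component a stand-in rather than Facebook’s implementation (the sleep times only mimic the speed gap, scaled down):

```python
import queue
import threading
import time

suggestions = queue.Queue()          # channel from the slow net to the fast search

def policy_network(stop):
    """Slow evaluator, like Darkforest's net at ~0.2 s per position."""
    move = 0
    while not stop.is_set():
        time.sleep(0.02)             # stands in for the network forward pass
        suggestions.put(move)        # suggest a (dummy) move for the search
        move += 1

def search(duration):
    """Fast search loop: examines many positions per second, folding in
    whatever suggestions the network has produced so far."""
    examined, biased = 0, 0
    deadline = time.time() + duration
    while time.time() < deadline:
        try:
            suggestions.get_nowait() # bias the tree toward a net suggestion
            biased += 1
        except queue.Empty:
            pass
        examined += 1                # expand one more candidate position
    return examined, biased

stop = threading.Event()
threading.Thread(target=policy_network, args=(stop,), daemon=True).start()
examined, biased = search(0.5)
stop.set()
print(examined > biased > 0)
```

The point of the design is that neither side waits for the other: the search never stalls on the slow network, and the network’s global judgment still steers the search as its evaluations arrive.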

And the results are impressive. Tian and Zhu call the combined engine Darkfores2. “Adding [a search-based approach] to Darkforest creates a much stronger player,” they say.

These guys have tested the new engine by playing it against existing machines. They say it beats Darkforest, the neural network alone, some 90 percent of the time and beats Pachi, one of the best search-based engines, more than 95 percent of the time.

That’s an impressive result. Darkfores2 does not yet have a ranking, but it is clearly a more powerful player than its predecessor, which is ranked at 1d-2d.

This kind of research is still in its early stages, so improvements are likely in the near future. It may be that humans are about to lose their mastery over computers in yet another area.
