Chapter 18. AlphaGo Zero
In the last chapter of the book, we'll continue our discussion of model-based methods and look at cases where we have a model of the environment, but that environment is shared by two competing parties. This situation is common in board games, where the rules are fixed and the full position is observable, but we face an opponent whose primary goal is to prevent us from winning.
Recently, DeepMind proposed a very elegant approach to such problems, in which no prior domain knowledge is required and the agent improves its policy solely via self-play. This method is called AlphaGo Zero, and it will be the main focus of the chapter, as we implement it to play the game Connect4.