AlphaGo Zero and MuZero
Model-based methods allow us to reduce the amount of interaction with the environment by building a model of it and using that model during training. In this chapter, we look at model-based methods by exploring cases where we have a model of the environment, but that environment is shared by two competing parties. This situation is very common in board games, where the rules are fixed and the full position is observable, but we face an opponent whose primary goal is to prevent us from winning.
A few years ago, DeepMind proposed a very elegant approach to solving such problems. It requires no prior domain knowledge: the agent improves its policy purely through self-play. This method is called AlphaGo Zero and was introduced in 2017. Later, in 2020, DeepMind extended the method by removing the requirement for an environment model, which allowed it to be applied to a much wider range of RL problems (including...