Implementing AlphaGo Zero
Finally, we implement AlphaGo Zero in this section. Besides achieving better performance than AlphaGo, it is actually simpler to implement. As discussed, AlphaGo Zero relies only on self-play data for learning, which relieves us of the burden of collecting large amounts of historical game data. Moreover, we only need to implement a single neural network that serves as both the policy and the value function. The following implementation makes some further simplifications; for example, we assume a Go board size of 9 instead of 19, which allows for faster training.
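To make the single-network idea concrete, here is a minimal sketch of such a dual-headed policy-value network in PyTorch. The class name, layer sizes, and the shallow convolutional trunk are illustrative choices of ours; the actual network in network.py (like the one in the AlphaGo Zero paper) uses a much deeper residual tower.

```python
import torch.nn as nn
import torch.nn.functional as F

class PolicyValueNet(nn.Module):
    """Illustrative dual-headed network: one shared trunk, two heads.

    The layer sizes are placeholders; AlphaGo Zero uses a deep
    residual tower with many more blocks and channels.
    """

    def __init__(self, board_size=9, in_planes=17, channels=64):
        super().__init__()
        # Shared convolutional trunk over the board feature planes.
        self.trunk = nn.Sequential(
            nn.Conv2d(in_planes, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(),
        )
        # Policy head: a distribution over board_size**2 moves plus pass.
        self.policy_head = nn.Sequential(
            nn.Conv2d(channels, 2, 1),
            nn.Flatten(),
            nn.Linear(2 * board_size * board_size,
                      board_size * board_size + 1),
        )
        # Value head: a scalar in [-1, 1] estimating the game outcome.
        self.value_head = nn.Sequential(
            nn.Conv2d(channels, 1, 1),
            nn.Flatten(),
            nn.Linear(board_size * board_size, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
            nn.Tanh(),
        )

    def forward(self, x):
        h = self.trunk(x)
        return F.log_softmax(self.policy_head(h), dim=1), self.value_head(h)
```

A single forward pass returns both the move prior and the position evaluation, which is exactly the pair of quantities the tree search consumes.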
The directory structure of our implementation looks like the following:
```
alphago_zero/
|-- __init__.py
|-- config.py
|-- constants.py
|-- controller.py
|-- features.py
|-- go.py
|-- mcts.py
|-- alphagozero_agent.py
|-- network.py
|-- preprocessing.py
|-- train.py
`-- utils.py
```
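As a taste of how the simplifications surface in the code, config.py collects hyperparameters such as the board size. The sketch below is illustrative only; the names and values are our own placeholders, not the file's actual contents.

```python
# config.py (illustrative sketch; names and values are our own choices)
BOARD_SIZE = 9            # simplified from the full 19x19 board
NUM_SIMULATIONS = 400     # MCTS simulations per move
C_PUCT = 1.0              # exploration constant in the PUCT formula
SELFPLAY_GAMES = 5000     # self-play games per training iteration
```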
We will pay particular attention to network.py and mcts.py, which contain the implementations of the policy-value network and the Monte Carlo tree search, respectively.
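As a preview of mcts.py, the sketch below shows the PUCT selection rule at the heart of AlphaGo Zero's search, in which the network prior P(s, a), the visit count N(s, a), and the mean value Q(s, a) combine to pick a child during each simulation. The Node class here is a minimal illustration of our own, not the exact data structure in the file.

```python
import math

class Node:
    """Minimal illustrative MCTS node; not the exact class in mcts.py."""

    def __init__(self, prior):
        self.prior = prior        # P(s, a) from the policy head
        self.visit_count = 0      # N(s, a)
        self.value_sum = 0.0      # W(s, a), total backed-up value
        self.children = {}        # move -> Node

    def q_value(self):
        # Mean action value Q(s, a) = W(s, a) / N(s, a); zero if unvisited.
        return self.value_sum / self.visit_count if self.visit_count else 0.0

def select_child(node, c_puct=1.0):
    """Pick the child maximizing Q(s, a) + U(s, a), the PUCT rule,
    where U(s, a) = c_puct * P(s, a) * sqrt(sum_b N(s, b)) / (1 + N(s, a))."""
    total_visits = sum(c.visit_count for c in node.children.values())

    def puct(child):
        u = c_puct * child.prior * math.sqrt(total_visits) / (1 + child.visit_count)
        return child.q_value() + u

    return max(node.children.items(), key=lambda kv: puct(kv[1]))
```

The exploration bonus U(s, a) favors moves that the network considers promising but that the search has visited rarely, so early simulations follow the prior while later ones increasingly trust the backed-up values.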