The agent interacts with the environment and, given a state, tries to execute the best action. Initially the agent executes random actions; as training progresses, its actions are increasingly based on the Q values for the current state. The epsilon parameter determines the probability of choosing a random action. Epsilon is initially set to 1 so that all actions are random. Once the agent has collected a specified number of training samples, epsilon is reduced at each step, lowering the probability of a random action. This scheme of selecting actions based on the value of epsilon is called the epsilon-greedy algorithm. We define two agent classes as follows:
- Agent: Executes actions based on the Q values for a given state
- RandomAgent: Executes random actions
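The epsilon-greedy scheme described above can be sketched as follows. This is an illustrative example, not the book's actual `Agent` class; the class name, parameter names, and decay values are assumptions chosen for clarity.

```python
import random

class EpsilonGreedyAgent:
    """Minimal sketch of epsilon-greedy action selection with delayed decay."""

    def __init__(self, n_actions, epsilon=1.0, epsilon_min=0.01,
                 epsilon_decay=0.995, min_samples=1000):
        self.n_actions = n_actions
        self.epsilon = epsilon            # starts at 1.0: purely random actions
        self.epsilon_min = epsilon_min    # floor so some exploration remains
        self.epsilon_decay = epsilon_decay
        self.min_samples = min_samples    # warm-up before decay begins
        self.samples_seen = 0

    def act(self, q_values):
        # With probability epsilon, explore with a random action;
        # otherwise exploit the action with the highest Q value.
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)
        return max(range(self.n_actions), key=lambda a: q_values[a])

    def observe(self):
        # Decay epsilon only after enough training samples are collected,
        # so early training remains fully exploratory.
        self.samples_seen += 1
        if self.samples_seen >= self.min_samples:
            self.epsilon = max(self.epsilon_min,
                               self.epsilon * self.epsilon_decay)
```

A `RandomAgent` corresponds to this class with epsilon pinned at 1, since every action is then drawn uniformly at random.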
The Agent class has three functions with the following functionalities...