Engineering the reward function
Reward function engineering means crafting the reward dynamics of the environment in an RL problem so that they reflect the objective you have in mind for your agent and lead the agent toward that objective. How you define the reward function can make training easy, difficult, or even impossible for the agent. Therefore, in most RL projects, a significant amount of effort is dedicated to designing the reward. In this section, we cover some specific cases where you will need to do this and how, then provide a concrete example, and finally discuss the challenges that come with engineering the reward function.
When to engineer the reward function
Multiple times in this book, including the previous section where we discussed the related concepts, we have mentioned how sparse rewards pose a problem for learning. One way of dealing with this is to shape the reward to make it non-sparse. The sparse-reward case, therefore, is a common reason why we may want to do reward function engineering, as in the sketch that follows.
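To make this concrete, here is a minimal sketch of reward shaping for a sparse-reward task. It assumes a Gym-style environment with the classic four-item step return, an observation that directly encodes the agent's position, and a known goal position; the wrapper name, the goal_position argument, and the shaping coefficient are illustrative choices, not part of any particular library.

import gym
import numpy as np

class DistanceShapingWrapper(gym.Wrapper):
    # Wraps a sparse-reward environment and adds a dense, distance-based
    # shaping term. Illustrative assumptions: the observation is the agent's
    # position vector and the goal position is known in advance.
    def __init__(self, env, goal_position, shaping_coef=0.01):
        super().__init__(env)
        self.goal_position = np.asarray(goal_position, dtype=np.float32)
        self.shaping_coef = shaping_coef

    def step(self, action):
        obs, reward, done, info = self.env.step(action)  # classic 4-tuple API
        # Penalize distance to the goal so the agent receives a learning
        # signal even before it ever reaches the sparse terminal reward.
        distance = np.linalg.norm(
            np.asarray(obs, dtype=np.float32) - self.goal_position
        )
        shaped_reward = reward - self.shaping_coef * distance
        return obs, shaped_reward, done, info

With such a wrapper, every step carries a small penalty proportional to the remaining distance to the goal, so the agent is nudged in the right direction long before it first collects the sparse success reward, while the small coefficient keeps the shaping term from overwhelming the original objective.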