Continuous Action Space
This chapter kicks off the advanced reinforcement learning (RL) part of the book by taking a look at a problem that has only been briefly mentioned so far: working with environments whose action space is not discrete. Continuous action space problems are an important subfield of RL, both theoretically and practically, because they have essential applications in robotics, control problems, and other fields in which we communicate with physical objects. In this chapter, you will become familiar with the challenges that arise in such cases and learn how to solve them.
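To make the distinction concrete, here is a minimal sketch of what a continuous action space looks like, modeled loosely on the Box-style spaces used by Gym-like RL libraries. The `Box` class and its bounds below are illustrative stand-ins, not part of any particular library's API: instead of choosing one action out of a finite set, the agent emits a real-valued vector whose components must lie within per-dimension bounds.

```python
import numpy as np


class Box:
    """Minimal stand-in for a Gym-style Box space: every action is a
    real-valued vector with per-dimension lower and upper bounds."""

    def __init__(self, low, high, shape):
        self.low = np.full(shape, low, dtype=np.float32)
        self.high = np.full(shape, high, dtype=np.float32)
        self.shape = shape

    def sample(self):
        # Draw a uniformly random valid action from inside the box
        return np.random.uniform(self.low, self.high).astype(np.float32)

    def contains(self, action):
        # An action is valid if it has the right shape and stays in bounds
        return (
            action.shape == self.shape
            and np.all(action >= self.low)
            and np.all(action <= self.high)
        )


# A 2-D continuous action space, e.g. (x, y) coordinates in [0, 1]
space = Box(low=0.0, high=1.0, shape=(2,))
action = space.sample()
print(action, space.contains(action))
```

Contrast this with a discrete space of N actions, where the policy outputs one integer index: here the policy must produce (or parameterize a distribution over) a point in a continuum, which is exactly what makes these problems harder.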
This material might be applicable even in problems and environments we've already seen. For example, in the previous chapter, when we implemented a mouse clicking in the browser environment, the x and y coordinates for the click position could be seen as two continuous variables to be predicted as actions. This might look a bit artificial, but such a representation has a lot...