Practical cross-entropy
The description of the cross-entropy method is split into two unequal parts: practical and theoretical. The practical part is intuitive in nature, while the theoretical explanation of why the cross-entropy method works, and what is actually happening, is more sophisticated.
You may remember that the central and trickiest thing in RL is the agent, which tries to accumulate as much total reward as possible by interacting with the environment. In practice, we follow a common ML approach and replace all the complexity of the agent with some kind of nonlinear trainable function, which maps the agent's input (observations from the environment) to some output. The details of the output that this function produces depend on the particular method or family of methods, as described in the previous section (such as value-based versus policy-based methods). As the cross-entropy method is policy-based, our nonlinear function (a neural network) produces the policy, which basically says...
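To make this concrete, here is a minimal sketch of such a nonlinear trainable function in PyTorch: a small network that takes an observation vector and returns one score (logit) per action, which a softmax turns into action probabilities. The class name, layer sizes, and dimensions below are illustrative assumptions (the toy dimensions roughly match a CartPole-like environment), not part of the original text.

```python
import torch
import torch.nn as nn


class PolicyNet(nn.Module):
    """Maps an observation vector to raw scores (logits), one per action."""
    def __init__(self, obs_size: int, hidden_size: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, n_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


if __name__ == "__main__":
    # Toy dimensions for illustration only: e.g. CartPole has a 4-value
    # observation and 2 discrete actions.
    net = PolicyNet(obs_size=4, hidden_size=128, n_actions=2)
    obs = torch.randn(1, 4)                   # a fake observation, batch of size 1
    probs = torch.softmax(net(obs), dim=1)    # logits -> action probabilities (the policy)
    action = torch.multinomial(probs, num_samples=1)  # sample an action from the policy
    print(probs, action)
```

The key point this sketch illustrates is that the agent is reduced to a differentiable function: given an observation, it outputs a probability distribution over actions, and acting means sampling from that distribution.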