We will now extend the same code to train an agent on the LunarLander problem, which is harder than CartPole. Most of the code is the same as before, so we will describe only the changes that need to be made. First, the reward shaping is different for the LunarLander problem, so we will include a function called reward_shaping() in the a3c.py file. It checks whether the lander has crashed onto the lunar surface; if so, the episode is terminated and a -1.0 penalty is applied. If the lander is barely moving (that is, it is stuck), the episode is also terminated and a -0.5 penalty is paid:
import numpy as np

def reward_shaping(r, s, s1):
    d = False
    # check if y-coordinate < 0; this implies the lander crashed
    if s1[1] < 0.0:
        print('-----lander crashed!-----')
        d = True
        r -= 1.0
    # check if the lander is stuck (barely moving between successive states)
    xx = s[0] - s1[0]
    yy = s[1] - s1[1]
    dist = np.sqrt(xx * xx + yy * yy)  # displacement; the threshold below is illustrative
    if dist < 1.0e-4:
        print('-----lander stuck!-----')
        d = True
        r -= 0.5
    return r, d  # shaped reward and a flag signalling that the episode should end
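The shaped reward and termination flag can then be folded into the per-step environment loop. The following is a minimal sketch of how this might look, assuming the classic Gym step API and a LunarLander-v2 environment with Box2D installed; it uses a random action purely for illustration, whereas in a3c.py the worker's policy network would choose the action:

import gym

# Hedged usage sketch: shows where reward_shaping() is called, not the exact a3c.py loop
env = gym.make('LunarLander-v2')
s = env.reset()
done = False
total_reward = 0.0
while not done:
    a = env.action_space.sample()              # placeholder for the policy's action
    s1, r, done, _ = env.step(a)               # old Gym API: (obs, reward, done, info)
    r, shaped_done = reward_shaping(r, s, s1)  # apply the crash/stuck penalties
    done = done or shaped_done                 # terminate early if crashed or stuck
    total_reward += r
    s = s1
print('episode return with shaped rewards:', total_reward)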